NVIDIA CUDA™
CUDA C Programming Guide
Version 3.2
11/9/2010







Changes from Version 3.1.1

• Simplified all the code samples that use cuParamSetv() to set a kernel parameter of type CUdeviceptr, since CUdeviceptr is now of the same size and alignment as void*, so there is no longer any need to go through an intermediate void* variable.
• Added Section 3.2.4.1.4 on 16-bit floating-point textures.
• Added Section 3.2.4.4 on read/write coherency for texture and surface memory.
• Added more details about surface memory access to Section 3.2.4.2.
• Added more details to Section 3.2.6.5.
• Mentioned the new stream synchronization function cudaStreamSynchronize() in Section 3.2.6.5.2.
• Mentioned in Sections 3.2.7.2, 3.3.10.2, and 4.3 the new API calls to deal with devices using NVIDIA SLI in AFR mode.
• Added Sections 3.2.9 and 3.3.12 about the call stack.
• Changed the type of the pitch variable in the second code sample of Section 3.3.4 from unsigned int to size_t following the function signature change of cuMemAllocPitch().
• Changed the type of the bytes variable in the last code sample of Section 3.3.4 from unsigned int to size_t following the function signature change of cuModuleGetGlobal().
• Removed cuParamSetTexRef() from Section 3.3.7 as it is no longer necessary.
• Updated Section 5.2.3, Table 5-1, and Section G.4.1 for devices of compute capability 2.1.
• Added GeForce GTX 480M, GeForce GTX 470M, GeForce GTX 460M, GeForce GTX 445M, GeForce GTX 435M, GeForce GTX 425M, GeForce GTX 420M, GeForce GTX 415M, GeForce GTX 460, GeForce GTS 450, GeForce GTX 465, GeForce GTX 580, Quadro 2000, Quadro 600, Quadro 4000, Quadro 5000, Quadro 5000M, and Quadro 6000 to Table A-1.
• Fixed sample code in Section B.2.3: array[] was declared as an array of char, causing a compiler error (“Unaligned memory accesses not supported”) when casting array to a pointer of higher alignment requirement; declaring array[] as an array of float fixes it.
• Mentioned in Section B.11 that any atomic operation can be implemented based on atomic Compare And Swap.
• Added Section B.15 on the new malloc() and free() device functions.
• Moved the type casting functions to a separate section C.2.4.
• Fixed the maximum height of a 2D texture reference for devices of compute capability 2.x (65535 instead of 65536) in Section G.1.
• Fixed the maximum dimensions for surface references in Section G.1.
• Mentioned the new cudaThreadSetCacheConfig()/cuCtxSetCacheConfig() API calls in Section G.4.1.
• Mentioned in Section G.4.2 that global memory accesses that are cached in L2 only are serviced with 32-byte memory transactions.




Table of Contents
Chapter 1. Introduction ................................................................................... 1
1.1 From Graphics Processing to General-Purpose Parallel Computing ................... 1
1.2 CUDA™: a General-Purpose Parallel Computing Architecture .......................... 3
1.3 A Scalable Programming Model .................................................................... 4
1.4 Document’s Structure ................................................................................. 6
Chapter 2. Programming Model ....................................................................... 7
2.1 Kernels ...................................................................................................... 7
2.2 Thread Hierarchy ........................................................................................ 8
2.3 Memory Hierarchy .................................................................................... 10
2.4 Heterogeneous Programming .................................................................... 11
2.5 Compute Capability ................................................................................... 14
Chapter 3. Programming Interface ................................................................ 15
3.1 Compilation with NVCC ............................................................................. 15
3.1.1 Compilation Workflow ......................................................................... 16
3.1.2 Binary Compatibility ........................................................................... 16
3.1.3 PTX Compatibility ............................................................................... 16
3.1.4 Application Compatibility ..................................................................... 17
3.1.5 C/C++ Compatibility .......................................................................... 18
3.1.6 64-Bit Compatibility ............................................................................ 18
3.2 CUDA C ................................................................................................... 18
3.2.1 Device Memory .................................................................................. 19
3.2.2 Shared Memory ................................................................................. 21
3.2.3 Multiple Devices ................................................................................. 28
3.2.4 Texture and Surface Memory .............................................................. 29
3.2.4.1 Texture Memory .......................................................................... 29
3.2.4.2 Surface Memory .......................................................................... 34
3.2.4.3 CUDA Arrays ............................................................................... 36
3.2.4.4 Read/Write Coherency ................................................................. 36




3.2.5 Page-Locked Host Memory .................................................................. 36
3.2.5.1 Portable Memory ......................................................................... 37
3.2.5.2 Write-Combining Memory ............................................................. 37
3.2.5.3 Mapped Memory .......................................................................... 37
3.2.6 Asynchronous Concurrent Execution .................................................... 38
3.2.6.1 Concurrent Execution between Host and Device ............................. 38
3.2.6.2 Overlap of Data Transfer and Kernel Execution .............................. 38
3.2.6.3 Concurrent Kernel Execution ........................................................ 38
3.2.6.4 Concurrent Data Transfers ........................................................... 39
3.2.6.5 Stream ....................................................................................... 39
3.2.6.6 Event ......................................................................................... 41
3.2.6.7 Synchronous Calls ....................................................................... 42
3.2.7 Graphics Interoperability ..................................................................... 42
3.2.7.1 OpenGL Interoperability ............................................................... 43
3.2.7.2 Direct3D Interoperability .............................................................. 45
3.2.8 Error Handling ................................................................................... 51
3.2.9 Call Stack .......................................................................................... 52
3.3 Driver API ................................................................................................ 52
3.3.1 Context ............................................................................................. 54
3.3.2 Module .............................................................................................. 55
3.3.3 Kernel Execution ................................................................................ 56
3.3.4 Device Memory .................................................................................. 58
3.3.5 Shared Memory ................................................................................. 61
3.3.6 Multiple Devices ................................................................................. 62
3.3.7 Texture and Surface Memory .............................................................. 62
3.3.7.1 Texture Memory .......................................................................... 62
3.3.7.2 Surface Memory .......................................................................... 64
3.3.8 Page-Locked Host Memory .................................................................. 65
3.3.9 Asynchronous Concurrent Execution .................................................... 66
3.3.9.1 Stream ....................................................................................... 66
3.3.9.2 Event Management ...................................................................... 67
3.3.9.3 Synchronous Calls ....................................................................... 67
3.3.10 Graphics Interoperability ..................................................................... 67




3.3.10.1 OpenGL Interoperability ............................................................... 68
3.3.10.2 Direct3D Interoperability .............................................................. 70
3.3.11 Error Handling ................................................................................... 77
3.3.12 Call Stack .......................................................................................... 77
3.4 Interoperability between Runtime and Driver APIs ....................................... 77
3.5 Versioning and Compatibility...................................................................... 78
3.6 Compute Modes ....................................................................................... 79
3.7 Mode Switches ......................................................................................... 79
Chapter 4. Hardware Implementation ........................................................... 81
4.1 SIMT Architecture ..................................................................................... 81
4.2 Hardware Multithreading ........................................................................... 82
4.3 Multiple Devices ....................................................................................... 83
Chapter 5. Performance Guidelines ............................................................... 85
5.1 Overall Performance Optimization Strategies ............................................... 85
5.2 Maximize Utilization .................................................................................. 85
5.2.1 Application Level ................................................................................ 85
5.2.2 Device Level ...................................................................................... 86
5.2.3 Multiprocessor Level ........................................................................... 86
5.3 Maximize Memory Throughput ................................................................... 88
5.3.1 Data Transfer between Host and Device .............................................. 89
5.3.2 Device Memory Accesses .................................................................... 89
5.3.2.1 Global Memory ............................................................................ 90
5.3.2.2 Local Memory .............................................................................. 91
5.3.2.3 Shared Memory ........................................................................... 92
5.3.2.4 Constant Memory ........................................................................ 92
5.3.2.5 Texture and Surface Memory ........................................................ 93
5.4 Maximize Instruction Throughput ............................................................... 93
5.4.1 Arithmetic Instructions ....................................................................... 94
5.4.2 Control Flow Instructions .................................................................... 96
5.4.3 Synchronization Instruction ................................................................. 97
Appendix A. CUDA-Enabled GPUs .................................................................. 99
Appendix B. C Language Extensions ............................................................ 103
B.1 Function Type Qualifiers .......................................................................... 103




B.1.1 __device__ ...................................................................................... 103
B.1.2 __global__ ...................................................................................... 103
B.1.3 __host__ ......................................................................................... 103
B.1.4 Restrictions ..................................................................................... 104
B.1.4.1 Functions Parameters ................................................................ 104
B.1.4.2 Variadic Functions ..................................................................... 104
B.1.4.3 Static Variables ......................................................................... 104
B.1.4.4 Function Pointers ....................................................................... 104
B.1.4.5 Recursion ................................................................................. 104
B.2 Variable Type Qualifiers .......................................................................... 105
B.2.1 __device__ ...................................................................................... 105
B.2.2 __constant__ ................................................................................... 105
B.2.3 __shared__ ..................................................................................... 105
B.2.4 Restrictions ..................................................................................... 106
B.2.4.1 Storage and Scope .................................................................... 106
B.2.4.2 Assignment ............................................................................... 106
B.2.4.3 Automatic Variable .................................................................... 106
B.2.4.4 Pointers .................................................................................... 107
B.2.5 volatile ............................................................................................ 107
B.3 Built-in Vector Types ............................................................................... 108
B.3.1 char1, uchar1, char2, uchar2, char3, uchar3, char4, uchar4, short1,
ushort1, short2, ushort2, short3, ushort3, short4, ushort4, int1, uint1, int2, uint2,
int3, uint3, int4, uint4, long1, ulong1, long2, ulong2, long3, ulong3, long4, ulong4,
longlong1, ulonglong1, longlong2, ulonglong2, float1, float2, float3, float4, double1,
double2 108
B.3.2 dim3 ............................................................................................... 109
B.4 Built-in Variables .................................................................................... 109
B.4.1 gridDim ........................................................................................... 109
B.4.2 blockIdx .......................................................................................... 109
B.4.3 blockDim ......................................................................................... 109
B.4.4 threadIdx ........................................................................................ 109
B.4.5 warpSize ......................................................................................... 110
B.4.6 Restrictions ..................................................................................... 110
B.5 Memory Fence Functions ......................................................................... 110




B.6 Synchronization Functions ....................................................................... 111
B.7 Mathematical Functions ........................................................................... 112
B.8 Texture Functions ................................................................................... 113
B.8.1 tex1Dfetch() .................................................................................... 113
B.8.2 tex1D() ........................................................................................... 114
B.8.3 tex2D() ........................................................................................... 114
B.8.4 tex3D() ........................................................................................... 114
B.9 Surface Functions ................................................................................... 114
B.9.1 surf1Dread() .................................................................................... 115
B.9.2 surf1Dwrite() ................................................................................... 115
B.9.3 surf2Dread() .................................................................................... 115
B.9.4 surf2Dwrite() ................................................................................... 115
B.10 Time Function ........................................................................................ 115
B.11 Atomic Functions .................................................................................... 116
B.11.1 Arithmetic Functions ......................................................................... 116
B.11.1.1 atomicAdd() .............................................................................. 116
B.11.1.2 atomicSub() .............................................................................. 117
B.11.1.3 atomicExch() ............................................................................. 117
B.11.1.4 atomicMin() .............................................................................. 117
B.11.1.5 atomicMax() .............................................................................. 117
B.11.1.6 atomicInc() ............................................................................... 117
B.11.1.7 atomicDec() .............................................................................. 118
B.11.1.8 atomicCAS() .............................................................................. 118
B.11.2 Bitwise Functions ............................................................................. 118
B.11.2.1 atomicAnd() .............................................................................. 118
B.11.2.2 atomicOr() ................................................................................ 118
B.11.2.3 atomicXor() ............................................................................... 118
B.12 Warp Vote Functions ............................................................................... 119
B.13 Profiler Counter Function ......................................................................... 119
B.14 Formatted Output ................................................................................... 119
B.14.1 Format Specifiers ............................................................................. 120
B.14.2 Limitations ...................................................................................... 120
B.14.3 Associated Host-Side API .................................................................. 121




B.14.4 Examples ........................................................................................ 121
B.15 Dynamic Global Memory Allocation ........................................................... 122
B.15.1 Heap Memory Allocation ................................................................... 123
B.15.2 Interoperability with Host Memory API ............................................... 123
B.15.3 Examples ........................................................................................ 123
B.15.3.1 Per Thread Allocation ................................................................. 123
B.15.3.2 Per Thread Block Allocation ........................................................ 124
B.15.3.3 Allocation Persisting Between Kernel Launches ............................. 125
B.16 Execution Configuration .......................................................................... 126
B.17 Launch Bounds ....................................................................................... 127
Appendix C. Mathematical Functions ........................................................... 129
C.1 Standard Functions ................................................................................. 129
C.1.1 Single-Precision Floating-Point Functions ............................................ 129
C.1.2 Double-Precision Floating-Point Functions .......................................... 132
C.1.3 Integer Functions ............................................................................. 134
C.2 Intrinsic Functions .................................................................................. 134
C.2.1 Single-Precision Floating-Point Functions ............................................ 134
C.2.2 Double-Precision Floating-Point Functions .......................................... 136
C.2.3 Integer Functions ............................................................................. 136
C.2.4 Type Casting Functions ..................................................................... 137
Appendix D. C++ Language Constructs ....................................................... 139
D.1 Polymorphism ........................................................................................ 139
D.2 Default Parameters ................................................................................. 140
D.3 Operator Overloading.............................................................................. 140
D.4 Namespaces ........................................................................................... 141
D.5 Function Templates ................................................................................ 141
D.6 Classes .................................................................................................. 142
D.6.1 Example 1 Pixel Data Type................................................................ 142
D.6.2 Example 2 Functor Class ................................................................... 143
Appendix E. NVCC Specifics ......................................................................... 145
E.1 __noinline__ and __forceinline__ ............................................................. 145
E.2 #pragma unroll ...................................................................................... 145
E.3 __restrict__ ........................................................................................... 146




Appendix F. Texture Fetching ...................................................................... 149
F.1 Nearest-Point Sampling ........................................................................... 150
F.2 Linear Filtering ....................................................................................... 150
F.3 Table Lookup ......................................................................................... 152
Appendix G. Compute Capabilities ............................................................... 153
G.1 Features and Technical Specifications ....................................................... 154
G.2 Floating-Point Standard ........................................................................... 155
G.3 Compute Capability 1.x ........................................................................... 157
G.3.1 Architecture ..................................................................................... 157
G.3.2 Global Memory ................................................................................ 158
G.3.2.1 Devices of Compute Capability 1.0 and 1.1 .................................. 158
G.3.2.2 Devices of Compute Capability 1.2 and 1.3 .................................. 158
G.3.3 Shared Memory ............................................................................... 159
G.3.3.1 32-Bit Strided Access ................................................................. 159
G.3.3.2 32-Bit Broadcast Access ............................................................. 160
G.3.3.3 8-Bit and 16-Bit Access .............................................................. 160
G.3.3.4 Larger Than 32-Bit Access .......................................................... 160
G.4 Compute Capability 2.x ........................................................................... 161
G.4.1 Architecture ..................................................................................... 161
G.4.2 Global Memory ................................................................................ 163
G.4.3 Shared Memory ............................................................................... 165
G.4.3.1 32-Bit Strided Access ................................................................. 165
G.4.3.2 Larger Than 32-Bit Access .......................................................... 165
G.4.4 Constant Memory ............................................................................. 166





List of Figures
Figure 1-1. Floating-Point Operations per Second and Memory Bandwidth for the CPU
and GPU 2
Figure 1-2. The GPU Devotes More Transistors to Data Processing ............................ 3
Figure 1-3. CUDA is Designed to Support Various Languages or Application
Programming Interfaces .................................................................................... 4
Figure 1-4. Automatic Scalability ............................................................................ 5
Figure 2-1. Grid of Thread Blocks ........................................................................... 9
Figure 2-2. Memory Hierarchy .............................................................................. 11
Figure 2-3. Heterogeneous Programming .............................................................. 13
Figure 3-1. Matrix Multiplication without Shared Memory ........................................ 24
Figure 3-2. Matrix Multiplication with Shared Memory ............................................ 28
Figure 3-3. Library Context Management .............................................................. 55
Figure 3-4. The Driver API is Backward, but Not Forward Compatible ...................... 79

Figure F-1. Nearest-Point Sampling of a One-Dimensional Texture of Four Texels .. 150
Figure F-2. Linear Filtering of a One-Dimensional Texture of Four Texels in Clamp
Addressing Mode ........................................................................................... 151
Figure F-3. One-Dimensional Table Lookup Using Linear Filtering .......................... 152
Figure G-1. Examples of Global Memory Accesses by a Warp, 4-Byte Word per Thread,
and Associated Memory Transactions Based on Compute Capability .................. 164
Figure G-2 Examples of Strided Shared Memory Accesses for Devices of Compute
Capability 2.x ................................................................................................ 167
Figure G-3 Examples of Irregular and Colliding Shared Memory Accesses for Devices
of Compute Capability 2.x .............................................................................. 169








Chapter 1.
Introduction
1.1 From Graphics Processing to
General-Purpose Parallel Computing
Driven by the insatiable market demand for realtime, high-definition 3D graphics,
the programmable Graphic Processor Unit or GPU has evolved into a highly
parallel, multithreaded, manycore processor with tremendous computational
horsepower and very high memory bandwidth, as illustrated by Figure 1-1.


Figure 1-1. Floating-Point Operations per Second and
Memory Bandwidth for the CPU and GPU


The reason behind the discrepancy in floating-point capability between the CPU and
the GPU is that the GPU is specialized for compute-intensive, highly parallel
computation – exactly what graphics rendering is about – and therefore designed
such that more transistors are devoted to data processing rather than data caching
and flow control, as schematically illustrated by Figure 1-2.


Figure 1-2. The GPU Devotes More Transistors to Data
Processing

More specifically, the GPU is especially well-suited to address problems that can be
expressed as data-parallel computations – the same program is executed on many
data elements in parallel – with high arithmetic intensity – the ratio of arithmetic
operations to memory operations. Because the same program is executed for each
data element, there is a lower requirement for sophisticated flow control, and
because it is executed on many data elements and has high arithmetic intensity, the
memory access latency can be hidden with calculations instead of big data caches.
Data-parallel processing maps data elements to parallel processing threads. Many
applications that process large data sets can use a data-parallel programming model
to speed up the computations. In 3D rendering, large sets of pixels and vertices are
mapped to parallel threads. Similarly, image and media processing applications such
as post-processing of rendered images, video encoding and decoding, image scaling,
stereo vision, and pattern recognition can map image blocks and pixels to parallel
processing threads. In fact, many algorithms outside the field of image rendering
and processing are accelerated by data-parallel processing, from general signal
processing or physics simulation to computational finance or computational biology.
1.2 CUDA™: a General-Purpose Parallel
Computing Architecture
In November 2006, NVIDIA introduced CUDA™, a general purpose parallel
computing architecture – with a new parallel programming model and instruction
set architecture – that leverages the parallel compute engine in NVIDIA GPUs to
solve many complex computational problems in a more efficient way than on a
CPU.
CUDA comes with a software environment that allows developers to use C as a
high-level programming language. As illustrated by Figure 1-3, other languages or
application programming interfaces are supported, such as CUDA FORTRAN,
OpenCL, and DirectCompute.

Figure 1-3. CUDA is Designed to Support Various Languages
or Application Programming Interfaces
1.3 A Scalable Programming Model
The advent of multicore CPUs and manycore GPUs means that mainstream
processor chips are now parallel systems. Furthermore, their parallelism continues
to scale with Moore's law. The challenge is to develop application software that
transparently scales its parallelism to leverage the increasing number of processor
cores, much as 3D graphics applications transparently scale their parallelism to
manycore GPUs with widely varying numbers of cores.
The CUDA parallel programming model is designed to overcome this challenge
while maintaining a low learning curve for programmers familiar with standard
programming languages such as C.
At its core are three key abstractions – a hierarchy of thread groups, shared
memories, and barrier synchronization – that are simply exposed to the programmer
as a minimal set of language extensions.
These abstractions provide fine-grained data parallelism and thread parallelism,
nested within coarse-grained data parallelism and task parallelism. They guide the
programmer to partition the problem into coarse sub-problems that can be solved
independently in parallel by blocks of threads, and each sub-problem into finer
pieces that can be solved cooperatively in parallel by all threads within the block.
This decomposition preserves language expressivity by allowing threads to

cooperate when solving each sub-problem, and at the same time enables automatic
scalability. Indeed, each block of threads can be scheduled on any of the available
processor cores, in any order, concurrently or sequentially, so that a compiled
CUDA program can execute on any number of processor cores as illustrated by
Figure 1-4, and only the runtime system needs to know the physical processor
count.
This scalable programming model allows the CUDA architecture to span a wide
market range by simply scaling the number of processors and memory partitions:
from the high-performance enthusiast GeForce GPUs and professional Quadro and
Tesla computing products to a variety of inexpensive, mainstream GeForce GPUs
(see Appendix A for a list of all CUDA-enabled GPUs).


A multithreaded program is partitioned into blocks of threads that execute independently from each
other, so that a GPU with more cores will automatically execute the program in less time than a GPU
with fewer cores.
Figure 1-4. Automatic Scalability


1.4 Document’s Structure
This document is organized into the following chapters:
• Chapter 1 is a general introduction to CUDA.
• Chapter 2 outlines the CUDA programming model.
• Chapter 3 describes the programming interface.
• Chapter 4 describes the hardware implementation.
• Chapter 5 gives some guidance on how to achieve maximum performance.
• Appendix A lists all CUDA-enabled devices.
• Appendix B is a detailed description of all extensions to the C language.
• Appendix C lists the mathematical functions supported in CUDA.
• Appendix D lists the C++ constructs supported in device code.
• Appendix E lists the specific keywords and directives supported by nvcc.
• Appendix F gives more details on texture fetching.
• Appendix G gives the technical specifications of various devices, as well as more architectural details.








Chapter 2.
Programming Model
This chapter introduces the main concepts behind the CUDA programming model
by outlining how they are exposed in C. An extensive description of CUDA C is
given in Section 3.2.
Full code for the vector addition example used in this chapter and the next can be
found in the vectorAdd SDK code sample.
2.1 Kernels
CUDA C extends C by allowing the programmer to define C functions, called
kernels, that, when called, are executed N times in parallel by N different CUDA
threads, as opposed to only once like regular C functions.
A kernel is defined using the __global__ declaration specifier and the number of
CUDA threads that execute that kernel for a given kernel call is specified using a
new <<<…>>> execution configuration syntax (see Appendix B.16). Each thread that
executes the kernel is given a unique thread ID that is accessible within the kernel
through the built-in threadIdx variable.
As an illustration, the following sample code adds two vectors A and B of size N
and stores the result into vector C:
// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
int i = threadIdx.x;
C[i] = A[i] + B[i];
}

int main()
{
...
// Kernel invocation with N threads
VecAdd<<<1, N>>>(A, B, C);
}
Here, each of the N threads that execute VecAdd() performs one pair-wise
addition.

2.2 Thread Hierarchy
For convenience, threadIdx is a 3-component vector, so that threads can be
identified using a one-dimensional, two-dimensional, or three-dimensional thread
index, forming a one-dimensional, two-dimensional, or three-dimensional thread
block. This provides a natural way to invoke computation across the elements in a
domain such as a vector, matrix, or volume.
The index of a thread and its thread ID relate to each other in a straightforward
way: For a one-dimensional block, they are the same; for a two-dimensional block
of size (Dx, Dy), the thread ID of a thread of index (x, y) is (x + y Dx); for a three-
dimensional block of size (Dx, Dy, Dz), the thread ID of a thread of index (x, y, z) is
(x + y Dx + z Dx Dy).
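For instance, the following sketch (directly applying the formula above, with
Dx = blockDim.x and Dy = blockDim.y) computes the flattened thread ID of the
calling thread within a three-dimensional block:
// Flattened thread ID within a three-dimensional block
int threadId = threadIdx.x
             + threadIdx.y * blockDim.x
             + threadIdx.z * blockDim.x * blockDim.y;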
As an example, the following code adds two matrices A and B of size NxN and
stores the result into matrix C:
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
float C[N][N])
{
int i = threadIdx.x;
int j = threadIdx.y;
C[i][j] = A[i][j] + B[i][j];
}

int main()
{
...
// Kernel invocation with one block of N * N * 1 threads
int numBlocks = 1;
dim3 threadsPerBlock(N, N);
MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
}
There is a limit to the number of threads per block, since all threads of a block are
expected to reside on the same processor core and must share the limited memory
resources of that core. On current GPUs, a thread block may contain up to 1024
threads.
However, a kernel can be executed by multiple equally-shaped thread blocks, so that
the total number of threads is equal to the number of threads per block times the
number of blocks.
Blocks are organized into a one-dimensional or two-dimensional grid of thread
blocks as illustrated by Figure 2-1. The number of thread blocks in a grid is usually
dictated by the size of the data being processed or the number of processors in the
system, which it can greatly exceed.



Figure 2-1. Grid of Thread Blocks

The number of threads per block and the number of blocks per grid specified in the
<<<…>>> syntax can be of type int or dim3. Two-dimensional blocks or grids can
be specified as in the example above.
Each block within the grid can be identified by a one-dimensional or two-
dimensional index accessible within the kernel through the built-in blockIdx
variable. The dimension of the thread block is accessible within the kernel through
the built-in blockDim variable.
Extending the previous MatAdd() example to handle multiple blocks, the code
becomes as follows.
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
float C[N][N])
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
int j = blockIdx.y * blockDim.y + threadIdx.y;
if (i < N && j < N)
C[i][j] = A[i][j] + B[i][j];

}

int main()
{
...
// Kernel invocation
dim3 threadsPerBlock(16, 16);
dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
}
A thread block size of 16x16 (256 threads), although arbitrary in this case, is a
common choice. The grid is created with enough blocks to have one thread per
matrix element as before. For simplicity, this example assumes that the number of
threads per grid in each dimension is evenly divisible by the number of threads per
block in that dimension, although that need not be the case.
Thread blocks are required to execute independently: It must be possible to execute
them in any order, in parallel or in series. This independence requirement allows
thread blocks to be scheduled in any order across any number of cores as illustrated
by Figure 1-4, enabling programmers to write code that scales with the number of
cores.
Threads within a block can cooperate by sharing data through some shared memory
and by synchronizing their execution to coordinate memory accesses. More
precisely, one can specify synchronization points in the kernel by calling the
__syncthreads() intrinsic function; __syncthreads() acts as a barrier at
which all threads in the block must wait before any is allowed to proceed.
Section 3.2.2 gives an example of using shared memory.
For efficient cooperation, the shared memory is expected to be a low-latency
memory near each processor core (much like an L1 cache) and __syncthreads()
is expected to be lightweight.
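As a minimal sketch (the kernel name and block size are hypothetical), the
following kernel reverses an array of 64 floats within a single 64-thread block,
using __syncthreads() to ensure that every element has been staged in shared
memory before any thread reads it back:
// Hypothetical kernel: reverse 64 floats within one 64-thread block
__global__ void ReverseInBlock(float* data)
{
    __shared__ float buffer[64];
    int i = threadIdx.x;
    buffer[i] = data[i];
    // Barrier: all writes to buffer complete before any read below
    __syncthreads();
    data[i] = buffer[63 - i];
}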
2.3 Memory Hierarchy
CUDA threads may access data from multiple memory spaces during their
execution as illustrated by Figure 2-2. Each thread has private local memory. Each
thread block has shared memory visible to all threads of the block and with the
same lifetime as the block. All threads have access to the same global memory.
There are also two additional read-only memory spaces accessible by all threads: the
constant and texture memory spaces. The global, constant, and texture memory
spaces are optimized for different memory usages (see Sections 5.3.2.1, 5.3.2.4, and
5.3.2.5). Texture memory also offers different addressing modes, as well as data
filtering, for some specific data formats (see Section 3.2.4).
The global, constant, and texture memory spaces are persistent across kernel
launches by the same application.
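In CUDA C, these memory spaces correspond to variable qualifiers, sketched
below (the identifiers are hypothetical; texture memory is accessed through
texture references as described in Section 3.2.4):
__device__   float globalVar;  // global memory, visible to all threads
__constant__ float constVar;   // constant memory, read-only in kernels

__global__ void Kernel()
{
    __shared__ float blockBuf[256]; // shared memory, per-block lifetime
    float perThread = constVar;     // registers or per-thread local memory
    blockBuf[threadIdx.x] = perThread + globalVar;
}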


Figure 2-2. Memory Hierarchy
2.4 Heterogeneous Programming
As illustrated by Figure 2-3, the CUDA programming model assumes that the
CUDA threads execute on a physically separate device that operates as a coprocessor
to the host running the C program. This is the case, for example, when the kernels
execute on a GPU and the rest of the C program executes on a CPU.















The CUDA programming model also assumes that both the host and the device
maintain their own separate memory spaces in DRAM, referred to as host memory and
device memory, respectively. Therefore, a program manages the global, constant, and
texture memory spaces visible to kernels through calls to the CUDA runtime
(described in Chapter 3). This includes device memory allocation and deallocation as
well as data transfer between host and device memory.


Serial code executes on the host while parallel code executes on the device.
Figure 2-3. Heterogeneous Programming

2.5 Compute Capability
The compute capability of a device is defined by a major revision number and a minor
revision number.
Devices with the same major revision number are of the same core architecture. The
major revision number of devices based on the Fermi architecture is 2. Prior devices
are all of compute capability 1.x (Their major revision number is 1).
The minor revision number corresponds to an incremental improvement to the core
architecture, possibly including new features.
Appendix A lists all CUDA-enabled devices along with their compute capability.
Appendix G gives the technical specifications of each compute capability.








Chapter 3.
Programming Interface
Two interfaces are currently supported to write CUDA programs: CUDA C and the
CUDA driver API. An application typically uses either one or the other, but it can
use both as described in Section 3.4.
CUDA C exposes the CUDA programming model as a minimal set of extensions to
the C language. Any source file that contains some of these extensions must be
compiled with nvcc as outlined in Section 3.1. These extensions allow
programmers to define a kernel as a C function and use some new syntax to specify
the grid and block dimension each time the function is called.
The CUDA driver API is a lower-level C API that provides functions to load
kernels as modules of CUDA binary or assembly code, to inspect their parameters,
and to launch them. Binary and assembly codes are usually obtained by compiling
kernels written in C.
CUDA C comes with a runtime API and both the runtime API and the driver API
provide functions to allocate and deallocate device memory, transfer data between
host memory and device memory, manage systems with multiple devices, etc.
The runtime API is built on top of the CUDA driver API. Initialization, context,
and module management are all implicit and resulting code is more concise.
In contrast, the CUDA driver API requires more code, is harder to program and
debug, but offers a better level of control and is language-independent since it
handles binary or assembly code.
Section 3.2 continues the description of CUDA C started in Chapter 2. It also
introduces concepts that are common to both CUDA C and the driver API: linear
memory, CUDA arrays, shared memory, texture memory, page-locked host
memory, device enumeration, asynchronous execution, interoperability with
graphics APIs. Section 3.3 assumes knowledge of these concepts and describes how
they are exposed by the driver API.
3.1 Compilation with NVCC
Kernels can be written using the CUDA instruction set architecture, called PTX,
which is described in the PTX reference manual. It is however usually more

effective to use a high-level programming language such as C. In both cases, kernels
must be compiled into binary code by nvcc to execute on the device.
nvcc is a compiler driver that simplifies the process of compiling C or PTX code: It
provides simple and familiar command line options and executes them by invoking
the collection of tools that implement the different compilation stages. This section
gives an overview of nvcc workflow and command options. A complete
description can be found in the nvcc user manual.
3.1.1 Compilation Workflow
Source files compiled with nvcc can include a mix of host code (i.e. code that
executes on the host) and device code (i.e. code that executes on the device). nvcc's
basic workflow consists of separating device code from host code and compiling the
device code into an assembly form (PTX code) and/or binary form (cubin object).
The generated host code is output either as C code that is left to be compiled using
another tool or as object code directly by letting nvcc invoke the host compiler
during the last compilation stage.
Applications can then:
• Either load and execute the PTX code or cubin object on the device using the CUDA driver API (see Section 3.3) and ignore the generated host code (if any);
• Or link to the generated host code; the generated host code includes the PTX code and/or cubin object as a global initialized data array and a translation of the <<<…>>> syntax introduced in Section 2.1 (and described in more detail in Section B.16) into the necessary CUDA C runtime function calls to load and launch each compiled kernel.
Any PTX code loaded by an application at runtime is compiled further to binary
code by the device driver. This is called just-in-time compilation. Just-in-time
compilation increases application load time, but allows applications to benefit from the
latest compiler improvements. It is also the only way for applications to run on
devices that did not exist at the time the application was compiled, as detailed in
Section 3.1.4.
3.1.2 Binary Compatibility
Binary code is architecture-specific. A cubin object is generated using the compiler
option –code that specifies the targeted architecture: For example, compiling with
–code=sm_13 produces binary code for devices of compute capability 1.3. Binary
compatibility is guaranteed from one minor revision to the next one, but not from
one minor revision to the previous one or across major revisions. In other words, a
cubin object generated for compute capability X.y is only guaranteed to execute on
devices of compute capability X.z where z≥y.
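For example (x.cu being a placeholder file name), the following command line
generates a cubin object for devices of compute capability 1.3:
nvcc x.cu –arch=compute_13 –code=sm_13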
3.1.3 PTX Compatibility
Some PTX instructions are only supported on devices of higher compute
capabilities. For example, atomic instructions on global memory are only supported

on devices of compute capability 1.1 and above; double-precision instructions are
only supported on devices of compute capability 1.3 and above. The –arch
compiler option specifies the compute capability that is assumed when compiling C
to PTX code. So, code that contains double-precision arithmetic, for example, must
be compiled with “-arch=sm_13” (or higher compute capability), otherwise
double-precision arithmetic will get demoted to single-precision arithmetic.
PTX code produced for some specific compute capability can always be compiled to
binary code of greater or equal compute capability.
3.1.4 Application Compatibility
To execute code on devices of specific compute capability, an application must load
binary or PTX code that is compatible with this compute capability as described in
Sections 3.1.2 and 3.1.3. In particular, to be able to execute code on future
architectures with higher compute capability – for which no binary code can be
generated yet –, an application must load PTX code that will be compiled just-in-
time for these devices.
Which PTX and binary code gets embedded in a CUDA C application is controlled
by the –arch and –code compiler options or the –gencode compiler option as
detailed in the nvcc user manual. For example,
nvcc x.cu
–gencode arch=compute_10,code=sm_10
–gencode arch=compute_11,code=\'compute_11,sm_11\'
embeds binary code compatible with compute capability 1.0 (first –gencode
option) and PTX and binary code compatible with compute capability 1.1 (second
-gencode option).
Host code is generated to automatically select at runtime the most appropriate code
to load and execute, which, in the above example, will be:
• 1.0 binary code for devices with compute capability 1.0,
• 1.1 binary code for devices with compute capability 1.1, 1.2, 1.3,
• binary code obtained by compiling 1.1 PTX code for devices with compute capabilities 2.0 and higher.
x.cu can have an optimized code path that uses atomic operations, for example,
which are only supported in devices of compute capability 1.1 and higher. The
__CUDA_ARCH__ macro can be used to differentiate various code paths based on
compute capability. It is only defined for device code. When compiling with
“arch=compute_11” for example, __CUDA_ARCH__ is equal to 110.
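As a minimal sketch (the kernel name is hypothetical), such a code path can be
selected at compile time as follows:
// Hypothetical kernel with a code path selected per compute capability
__global__ void Increment(int* counter)
{
#if __CUDA_ARCH__ >= 110
    // Compute capability 1.1 and higher supports global memory atomics
    atomicAdd(counter, 1);
#else
    // Fallback for compute capability 1.0 (not safe under concurrent access)
    *counter = *counter + 1;
#endif
}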
Applications using the driver API must compile code to separate files and explicitly
load and execute the most appropriate file at runtime.
The nvcc user manual lists various shorthands for the –arch, –code, and
–gencode compiler options. For example, “–arch=sm_13” is a shorthand for
“–arch=compute_13 –code=compute_13,sm_13” (which is the same as
“–gencode arch=compute_13,code=\'compute_13,sm_13\'”).

3.1.5 C/C++ Compatibility
The front end of the compiler processes CUDA source files according to C++
syntax rules. Full C++ is supported for the host code. However, only a subset of
C++ is fully supported for the device code as described in detail in Appendix D. As
a consequence of the use of C++ syntax rules, void pointers (e.g., returned by
malloc()) cannot be assigned to non-void pointers without a typecast.
nvcc also supports specific keywords and directives detailed in Appendix E.
3.1.6 64-Bit Compatibility
The 64-bit version of nvcc compiles device code in 64-bit mode (i.e. pointers are
64-bit). Device code compiled in 64-bit mode is only supported with host code
compiled in 64-bit mode.
Similarly, the 32-bit version of nvcc compiles device code in 32-bit mode and
device code compiled in 32-bit mode is only supported with host code compiled in
32-bit mode.
The 32-bit version of nvcc can compile device code in 64-bit mode also using the
–m64 compiler option.
The 64-bit version of nvcc can compile device code in 32-bit mode also using the
–m32 compiler option.
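For example (x.cu being a placeholder file name), the 32-bit version of nvcc
compiles the device code of x.cu in 64-bit mode with:
nvcc –m64 x.cu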
3.2 CUDA C
CUDA C provides a simple path for users familiar with the C programming
language to easily write programs for execution by the device.
It consists of a minimal set of extensions to the C language and a runtime library.
The core language extensions have been introduced in Chapter 2. This section
continues with an introduction to the runtime. A complete description of all
extensions can be found in Appendix B and a complete description of the runtime
in the CUDA reference manual.
The runtime is implemented in the cudart dynamic library and all its entry points
are prefixed with cuda.
There is no explicit initialization function for the runtime; it initializes the first time
a runtime function is called (more specifically any function other than functions
from the device and version management sections of the reference manual). One
needs to keep this in mind when timing runtime function calls and when
interpreting the error code from the first call into the runtime.
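A minimal sketch of checking this first error code (the variable names are
hypothetical):
// The first runtime call triggers initialization, so its error code may
// reflect an initialization failure rather than the call itself
float* d_ptr;
cudaError_t err = cudaMalloc((void**)&d_ptr, 256 * sizeof(float));
if (err != cudaSuccess)
    printf("CUDA error: %s\n", cudaGetErrorString(err));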
Once the runtime has been initialized in a host thread, any resource (memory,
stream, event, etc.) allocated via some runtime function call in the host thread is
only valid within the context of the host thread. Therefore only runtime functions
calls made by the host thread (memory copies, kernel launches, …) can operate on
these resources. This is because a CUDA context (see Section 3.3.1) is created under

the hood as part of initialization and made current to the host thread, and it cannot
be made current to any other host thread.
On systems with multiple devices, kernels are executed on device 0 by default as
detailed in Section 3.2.3.
3.2.1 Device Memory
As mentioned in Section 2.4, the CUDA programming model assumes a system
composed of a host and a device, each with their own separate memory. Kernels
can only operate out of device memory, so the runtime provides functions to
allocate, deallocate, and copy device memory, as well as transfer data between host
memory and device memory.
Device memory can be allocated either as linear memory or as CUDA arrays.
CUDA arrays are opaque memory layouts optimized for texture fetching. They are
described in Section 3.2.4.
Linear memory exists on the device in a 32-bit address space for devices of compute
capability 1.x and a 40-bit address space for devices of compute capability 2.x, so
separately allocated entities can reference one another via pointers, for example, in a
binary tree.
Linear memory is typically allocated using cudaMalloc() and freed using
cudaFree(), and data transfers between host memory and device memory are
typically done using cudaMemcpy(). In the vector addition code sample of
Section 2.1, the vectors need to be copied from host memory to device memory:
// Device code
__global__ void VecAdd(float* A, float* B, float* C, int N)
{
int i = blockDim.x * blockIdx.x + threadIdx.x;
if (i < N)
C[i] = A[i] + B[i];
}

// Host code
int main()
{
int N = ...;
size_t size = N * sizeof(float);

// Allocate vectors h_A, h_B, and h_C in host memory
float* h_A = (float*)malloc(size);
float* h_B = (float*)malloc(size);
float* h_C = (float*)malloc(size);

// Initialize input vectors
...

// Allocate vectors in device memory
float* d_A;
cudaMalloc(&d_A, size);
float* d_B;
cudaMalloc(&d_B, size);
float* d_C;

cudaMalloc(&d_C, size);

// Copy vectors from host memory to device memory
cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

// Invoke kernel
int threadsPerBlock = 256;
int blocksPerGrid =
(N + threadsPerBlock - 1) / threadsPerBlock;
VecAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);

// Copy result from device memory to host memory
// h_C contains the result in host memory
cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);

// Free device memory
cudaFree(d_A);
cudaFree(d_B);
cudaFree(d_C);

// Free host memory
...
}
Linear memory can also be allocated through cudaMallocPitch() and
cudaMalloc3D(). These functions are recommended for allocations of 2D or 3D
arrays as they make sure that the allocation is appropriately padded to meet the
alignment requirements described in Section 5.3.2.1, therefore ensuring best
performance when accessing the row addresses or performing copies between 2D
arrays and other regions of device memory (using the cudaMemcpy2D() and
cudaMemcpy3D() functions). The returned pitch (or stride) must be used to access
array elements. The following code sample allocates a width×height 2D array of
floating-point values and shows how to loop over the array elements in device code:
// Host code
int width = 64, height = 64;
float* devPtr;
size_t pitch;
cudaMallocPitch(&devPtr, &pitch,
width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);

// Device code
__global__ void MyKernel(float* devPtr,
size_t pitch, int width, int height)
{
for (int r = 0; r < height; ++r) {
float* row = (float*)((char*)devPtr + r * pitch);
for (int c = 0; c < width; ++c) {
float element = row[c];
}
}
}
The following code sample allocates a width×height×depth 3D array of
floating-point values and shows how to loop over the array elements in device code:
// Host code
int width = 64, height = 64, depth = 64;
cudaExtent extent = make_cudaExtent(width * sizeof(float),
height, depth);
cudaPitchedPtr devPitchedPtr;
cudaMalloc3D(&devPitchedPtr, extent);
MyKernel<<<100, 512>>>(devPitchedPtr, width, height, depth);

// Device code
__global__ void MyKernel(cudaPitchedPtr devPitchedPtr,
int width, int height, int depth)
{
char* devPtr = devPitchedPtr.ptr;
size_t pitch = devPitchedPtr.pitch;
size_t slicePitch = pitch * height;
for (int z = 0; z < depth; ++z) {
char* slice = devPtr + z * slicePitch;
for (int y = 0; y < height; ++y) {
float* row = (float*)(slice + y * pitch);
for (int x = 0; x < width; ++x) {
float element = row[x];
}
}
}
}
The reference manual lists all the various functions used to copy memory between
linear memory allocated with cudaMalloc(), linear memory allocated with
cudaMallocPitch() or cudaMalloc3D(), CUDA arrays, and memory
allocated for variables declared in global or constant memory space.
The following code sample illustrates various ways of accessing global variables via
the runtime API:
__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));

__device__ float devData;
float value = 3.14f;
cudaMemcpyToSymbol(devData, &value, sizeof(float));

__device__ float* devPointer;
float* ptr;
cudaMalloc(&ptr, 256 * sizeof(float));
cudaMemcpyToSymbol(devPointer, &ptr, sizeof(ptr));
cudaGetSymbolAddress() is used to retrieve the address pointing to the
memory allocated for a variable declared in global memory space. The size of the
allocated memory is obtained through cudaGetSymbolSize().
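For example, the address and size of the devData variable declared above can be retrieved as in the following brief sketch, which reuses the names from the sample above:
float* dptr;
size_t bytes;
cudaGetSymbolAddress((void**)&dptr, devData);
cudaGetSymbolSize(&bytes, devData);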
3.2.2 Shared Memory
As detailed in Section B.2, shared memory is allocated using the __shared__
qualifier.
Shared memory is expected to be much faster than global memory as mentioned in
Section 2.2 and detailed in Section 5.3.2.3. Any opportunity to replace global
memory accesses by shared memory accesses should therefore be exploited as
illustrated by the following matrix multiplication example.
The following code sample is a straightforward implementation of matrix
multiplication that does not take advantage of shared memory. Each thread reads
one row of A and one column of B and computes the corresponding element of C
as illustrated in Figure 3-1. A is therefore read B.width times from global memory
and B is read A.height times.
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
typedef struct {
int width;
int height;
float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
// Load A and B to device memory
Matrix d_A;
d_A.width = A.width; d_A.height = A.height;
size_t size = A.width * A.height * sizeof(float);
cudaMalloc(&d_A.elements, size);
cudaMemcpy(d_A.elements, A.elements, size,
cudaMemcpyHostToDevice);
Matrix d_B;
d_B.width = B.width; d_B.height = B.height;
size = B.width * B.height * sizeof(float);
cudaMalloc(&d_B.elements, size);
cudaMemcpy(d_B.elements, B.elements, size,
cudaMemcpyHostToDevice);

// Allocate C in device memory
Matrix d_C;
d_C.width = C.width; d_C.height = C.height;
size = C.width * C.height * sizeof(float);
cudaMalloc(&d_C.elements, size);

// Invoke kernel
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

// Read C from device memory
cudaMemcpy(C.elements, d_C.elements, size,
cudaMemcpyDeviceToHost);
// Free device memory
cudaFree(d_A.elements);
cudaFree(d_B.elements);
cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
// Each thread computes one element of C
// by accumulating results into Cvalue
float Cvalue = 0;
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
for (int e = 0; e < A.width; ++e)
Cvalue += A.elements[row * A.width + e]
* B.elements[e * B.width + col];
C.elements[row * C.width + col] = Cvalue;
}

Figure 3-1. Matrix Multiplication without Shared Memory
The following code sample is an implementation of matrix multiplication that does
take advantage of shared memory. In this implementation, each thread block is
responsible for computing one square sub-matrix Csub of C and each thread within
the block is responsible for computing one element of Csub. As illustrated in Figure
3-2, Csub is equal to the product of two rectangular matrices: the sub-matrix of A of
dimension (A.width, block_size) that has the same row indices as Csub, and the sub-
matrix of B of dimension (block_size, A.width) that has the same column indices as
Csub. In order to fit into the device's resources, these two rectangular matrices are
divided into as many square matrices of dimension block_size as necessary and Csub is
computed as the sum of the products of these square matrices. Each of these
products is performed by first loading the two corresponding square matrices from
global memory to shared memory with one thread loading one element of each
matrix, and then by having each thread compute one element of the product. Each
thread accumulates the result of each of these products into a register and once
done writes the result to global memory.
By blocking the computation this way, we take advantage of fast shared memory
and save a lot of global memory bandwidth since A is only read (B.width / block_size)
times from global memory and B is read (A.height / block_size) times.
The Matrix type from the previous code sample is augmented with a stride field, so
that sub-matrices can be efficiently represented with the same type. __device__
functions (see Section B.1.1) are used to get and set elements and build any sub-
matrix from a matrix.
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
int width;
int height;
int stride;
float* elements;
} Matrix;

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col,
float value)
{
A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZE x BLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
Matrix Asub;
Asub.width = BLOCK_SIZE;
Asub.height = BLOCK_SIZE;
Asub.stride = A.stride;
Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row
+ BLOCK_SIZE * col];
return Asub;
}

// Thread block size
#define BLOCK_SIZE 16

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
// Load A and B to device memory
Matrix d_A;
d_A.width = d_A.stride = A.width; d_A.height = A.height;
size_t size = A.width * A.height * sizeof(float);
cudaMalloc(&d_A.elements, size);
cudaMemcpy(d_A.elements, A.elements, size,
cudaMemcpyHostToDevice);
Matrix d_B;
d_B.width = d_B.stride = B.width; d_B.height = B.height;
size = B.width * B.height * sizeof(float);
cudaMalloc(&d_B.elements, size);
cudaMemcpy(d_B.elements, B.elements, size,
cudaMemcpyHostToDevice);

// Allocate C in device memory
Matrix d_C;
d_C.width = d_C.stride = C.width; d_C.height = C.height;
size = C.width * C.height * sizeof(float);
cudaMalloc(&d_C.elements, size);

// Invoke kernel
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

// Read C from device memory
cudaMemcpy(C.elements, d_C.elements, size,
cudaMemcpyDeviceToHost);

// Free device memory
cudaFree(d_A.elements);
cudaFree(d_B.elements);
cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
// Block row and column
int blockRow = blockIdx.y;
int blockCol = blockIdx.x;

// Each thread block computes one sub-matrix Csub of C
Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

// Each thread computes one element of Csub
// by accumulating results into Cvalue
float Cvalue = 0;

// Thread row and column within Csub
int row = threadIdx.y;
int col = threadIdx.x;

// Loop over all the sub-matrices of A and B that are
// required to compute Csub
// Multiply each pair of sub-matrices together
// and accumulate the results
for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {

// Get sub-matrix Asub of A
Matrix Asub = GetSubMatrix(A, blockRow, m);

// Get sub-matrix Bsub of B
Matrix Bsub = GetSubMatrix(B, m, blockCol);

// Shared memory used to store Asub and Bsub respectively
__shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
__shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

// Load Asub and Bsub from device memory to shared memory
// Each thread loads one element of each sub-matrix
As[row][col] = GetElement(Asub, row, col);
Bs[row][col] = GetElement(Bsub, row, col);

// Synchronize to make sure the sub-matrices are loaded
// before starting the computation
__syncthreads();

// Multiply Asub and Bsub together
for (int e = 0; e < BLOCK_SIZE; ++e)
Cvalue += As[row][e] * Bs[e][col];

// Synchronize to make sure that the preceding
// computation is done before loading two new
// sub-matrices of A and B in the next iteration
__syncthreads();
}

// Write Csub to device memory
// Each thread writes one element
SetElement(Csub, row, col, Cvalue);
}
Figure 3-2. Matrix Multiplication with Shared Memory
3.2.3 Multiple Devices
A host system can have multiple devices. These devices can be enumerated, their
properties can be queried, and one of them can be selected for kernel executions.
Several host threads can execute device code on the same device, but by design, a
host thread can execute device code on only one device at any given time. As a
consequence, multiple host threads are required to execute device code on multiple
devices. Also, any CUDA resources created through the runtime in one host thread
cannot be used by the runtime from another host thread.
The following code sample enumerates all devices in the system and retrieves their
properties. It also determines the number of CUDA-enabled devices.
int deviceCount;
cudaGetDeviceCount(&deviceCount);
int device;
for (device = 0; device < deviceCount; ++device) {
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, device);
if (device == 0) {
if (deviceProp.major == 9999 && deviceProp.minor == 9999)
printf("There is no device supporting CUDA.\n");
else if (deviceCount == 1)
printf("There is 1 device supporting CUDA\n");
else
printf("There are %d devices supporting CUDA\n",
deviceCount);
}
}
By default, the device associated to the host thread is implicitly selected as device 0
as soon as a non-device management runtime function is called (see Section 3.6 for
exceptions). Any other device can be selected by calling cudaSetDevice() first.
After a device has been selected, either implicitly or explicitly, any subsequent
explicit call to cudaSetDevice() will fail up until cudaThreadExit() is called.
cudaThreadExit() cleans up all runtime-related resources associated with the
calling host thread. Any subsequent API call reinitializes the runtime.
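For example, a host thread that wants to work on the second device could proceed as in the following sketch, which assumes that at least two devices are present:
cudaSetDevice(1); // must precede any non-device management runtime call
... // subsequent allocations and kernel launches target device 1
cudaThreadExit(); // clean up; the next runtime call reinitializes the runtime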
3.2.4 Texture and Surface Memory
CUDA supports a subset of the texturing hardware that the GPU uses for graphics
to access texture and surface memory. Reading data from texture or surface memory
instead of global memory can have several performance benefits as described in
Section 5.3.2.5.
3.2.4.1 Texture Memory
Texture memory is read from kernels using device functions called texture fetches,
described in Section B.8. The first parameter of a texture fetch specifies an object
called a texture reference.
A texture reference defines which part of texture memory is fetched. As detailed in
Section 3.2.4.1.3, it must be bound through runtime functions to some region of
memory, called a texture, before it can be used by a kernel. Several distinct texture
references might be bound to the same texture or to textures that overlap in
memory.
A texture reference has several attributes. One of them is its dimensionality that
specifies whether the texture is addressed as a one-dimensional array using one
texture coordinate, a two-dimensional array using two texture coordinates, or a three-
dimensional array using three texture coordinates. Elements of the array are called
texels, short for “texture elements.”
Other attributes define the input and output data types of the texture fetch, as well
as how the input coordinates are interpreted and what processing should be done.
A texture can be any region of linear memory or a CUDA array (described in
Section 3.2.4.3).
Section G.1 lists the maximum texture width, height, and depth depending on the
compute capability of the device.
3.2.4.1.1 Texture Reference Declaration
Some of the attributes of a texture reference are immutable and must be known at
compile time; they are specified when declaring the texture reference. A texture
reference is declared at file scope as a variable of type texture:
texture<Type, Dim, ReadMode> texRef;
where:
 Type specifies the type of data that is returned when fetching the texture; Type
is restricted to the basic integer and single-precision floating-point types and any
of the 1-, 2-, and 4-component vector types defined in Section B.3.1;
 Dim specifies the dimensionality of the texture reference and is equal to 1, 2, or
3; Dim is an optional argument which defaults to 1;
 ReadMode is equal to cudaReadModeNormalizedFloat or
cudaReadModeElementType; if it is cudaReadModeNormalizedFloat
and Type is a 16-bit or 8-bit integer type, the value is actually returned as
floating-point type and the full range of the integer type is mapped to [0.0, 1.0]
for unsigned integer type and [-1.0, 1.0] for signed integer type; for example, an
unsigned 8-bit texture element with the value 0xff reads as 1; if it is
cudaReadModeElementType, no conversion is performed; ReadMode is an
optional argument which defaults to cudaReadModeElementType.
A texture reference can only be declared as a static global variable and cannot be
passed as an argument to a function.
3.2.4.1.2 Runtime Texture Reference Attributes
The other attributes of a texture reference are mutable and can be changed at
runtime through the host runtime. They specify whether texture coordinates are
normalized or not, the addressing mode, and texture filtering, as detailed below.
By default, textures are referenced using floating-point coordinates in the range
[0, N) where N is the size of the texture in the dimension corresponding to the
coordinate. For example, a texture that is 64×32 in size will be referenced with
coordinates in the range [0, 63] and [0, 31] for the x and y dimensions, respectively.
Normalized texture coordinates cause the coordinates to be specified in the range
[0.0, 1.0) instead of [0, N), so the same 64×32 texture would be addressed by
normalized coordinates in the range [0, 1) in both the x and y dimensions.
Normalized texture coordinates are a natural fit for some applications' requirements,
if it is preferable for the texture coordinates to be independent of the texture size.
The addressing mode defines what happens when texture coordinates are out of
range. When using unnormalized texture coordinates, texture coordinates outside
the range [0, N) are clamped: Values below 0 are set to 0 and values greater or equal
to N are set to N-1. Clamping is also the default addressing mode when using
normalized texture coordinates: Values below 0.0 or above 1.0 are clamped to the
range [0.0, 1.0). For normalized coordinates, the “wrap” addressing mode also may
be specified. Wrap addressing is usually used when the texture contains a periodic
signal. It uses only the fractional part of the texture coordinate; for example, 1.25 is
treated the same as 0.25 and -1.25 is treated the same as 0.75.
Linear texture filtering may be done only for textures that are configured to return
floating-point data. It performs low-precision interpolation between neighboring
texels. When enabled, the texels surrounding a texture fetch location are read and
the return value of the texture fetch is interpolated based on where the texture
coordinates fell between the texels. Simple linear interpolation is performed for one-
dimensional textures and bilinear interpolation is performed for two-dimensional
textures.
Appendix F gives more details on texture fetching.
3.2.4.1.3 Texture Binding
As explained in the reference manual, the runtime API has a low-level C-style
interface and a high-level C++-style interface. The texture type is defined in the
high-level API as a structure publicly derived from the textureReference type
defined in the low-level API as such:
struct textureReference {
int normalized;
enum cudaTextureFilterMode filterMode;
enum cudaTextureAddressMode addressMode[3];
struct cudaChannelFormatDesc channelDesc;
};
 normalized specifies whether texture coordinates are normalized or not; if it
is non-zero, all elements in the texture are addressed with texture coordinates in
the range [0,1] rather than in the range [0,width-1], [0,height-1], or
[0,depth-1] where width, height, and depth are the texture sizes;
 filterMode specifies the filtering mode, that is how the value returned when
fetching the texture is computed based on the input texture coordinates;
filterMode is equal to cudaFilterModePoint or
cudaFilterModeLinear; if it is cudaFilterModePoint, the returned
value is the texel whose texture coordinates are the closest to the input texture
coordinates; if it is cudaFilterModeLinear, the returned value is the linear
interpolation of the two (for a one-dimensional texture), four (for a
two-dimensional texture), or eight (for a three-dimensional texture) texels
whose texture coordinates are the closest to the input texture coordinates;
cudaFilterModeLinear is only valid for returned values of floating-point
type;
 addressMode specifies the addressing mode, that is how out-of-range texture
coordinates are handled; addressMode is an array of size three whose first,
second, and third elements specify the addressing mode for the first, second,
and third texture coordinates, respectively; the addressing mode is equal to
either cudaAddressModeClamp, in which case out-of-range texture
coordinates are clamped to the valid range, or cudaAddressModeWrap, in
which case out-of-range texture coordinates are wrapped to the valid range;
cudaAddressModeWrap is only supported for normalized texture
coordinates;
 channelDesc describes the format of the value that is returned when fetching
the texture; channelDesc is of the following type:
struct cudaChannelFormatDesc {
int x, y, z, w;
enum cudaChannelFormatKind f;
};
where x, y, z, and w are equal to the number of bits of each component of the
returned value and f is:
 cudaChannelFormatKindSigned if these components are of signed
integer type,
 cudaChannelFormatKindUnsigned if they are of unsigned integer
type,
 cudaChannelFormatKindFloat if they are of floating point type.
normalized, addressMode, and filterMode may be directly modified in host
code.
Before a kernel can use a texture reference to read from texture memory, the texture
reference must be bound to a texture using cudaBindTexture() or
cudaBindTextureToArray(). cudaUnbindTexture() is used to unbind a
texture reference.
The following code samples bind a texture reference to linear memory pointed to by
devPtr:
 Using the low-level API:
texture<float, 2, cudaReadModeElementType> texRef;
textureReference* texRefPtr;
cudaGetTextureReference(&texRefPtr, "texRef");
cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc<float>();
cudaBindTexture2D(0, texRefPtr, devPtr, &channelDesc,
width, height, pitch);
 Using the high-level API:
texture<float, 2, cudaReadModeElementType> texRef;
cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc<float>();
cudaBindTexture2D(0, texRef, devPtr, &channelDesc,
width, height, pitch);
The following code samples bind a texture reference to a CUDA array cuArray:
 Using the low-level API:
texture<float, 2, cudaReadModeElementType> texRef;
textureReference* texRefPtr;
cudaGetTextureReference(&texRefPtr, "texRef");
cudaChannelFormatDesc channelDesc;
cudaGetChannelDesc(&channelDesc, cuArray);
cudaBindTextureToArray(texRef, cuArray, &channelDesc);
 Using the high-level API:
texture<float, 2, cudaReadModeElementType> texRef;
cudaBindTextureToArray(texRef, cuArray);
The format specified when binding a texture to a texture reference must match the
parameters specified when declaring the texture reference; otherwise, the results of
texture fetches are undefined.
The following code sample applies some simple transformation kernel to a texture:
// 2D float texture
texture<float, 2, cudaReadModeElementType> texRef;

// Simple transformation kernel
__global__ void transformKernel(float* output,
int width, int height, float theta)
{
// Calculate normalized texture coordinates
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

float u = x / (float)width;
float v = y / (float)height;

// Transform coordinates
u -= 0.5f;
v -= 0.5f;
float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;

// Read from texture and write to global memory
output[y * width + x] = tex2D(texRef, tu, tv);
}

// Host code
int main()
{
// Allocate CUDA array in device memory
cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc(32, 0, 0, 0,
cudaChannelFormatKindFloat);
cudaArray* cuArray;
cudaMallocArray(&cuArray, &channelDesc, width, height);

// Copy to device memory some data located at address h_data
// in host memory
cudaMemcpyToArray(cuArray, 0, 0, h_data, size,
cudaMemcpyHostToDevice);

// Set texture parameters
texRef.addressMode[0] = cudaAddressModeWrap;
texRef.addressMode[1] = cudaAddressModeWrap;
texRef.filterMode = cudaFilterModeLinear;
texRef.normalized = true;

// Bind the array to the texture reference
cudaBindTextureToArray(texRef, cuArray, channelDesc);

// Allocate result of transformation in device memory
float* output;
cudaMalloc(&output, width * height * sizeof(float));

// Invoke kernel
dim3 dimBlock(16, 16);
dim3 dimGrid((width + dimBlock.x - 1) / dimBlock.x,
(height + dimBlock.y - 1) / dimBlock.y);
transformKernel<<<dimGrid, dimBlock>>>(output, width, height,
angle);

// Free device memory
cudaFreeArray(cuArray);
cudaFree(output);
}
3.2.4.1.4 16-Bit Floating-Point Textures
The 16-bit floating-point or half format supported by CUDA arrays is the same as
the IEEE 754-2008 binary16 format.
CUDA C does not support a matching data type, but provides intrinsic functions to
convert to and from the 32-bit floating-point format via the unsigned short
type: __float2half(float) and __half2float(unsigned short). These
functions are only supported in device code. Equivalent functions for the host code
can be found in the OpenEXR library, for example.
16-bit floating-point components are promoted to 32-bit float during texture
fetching before any filtering is performed.
A channel description for the 16-bit floating-point format can be created by calling
one of the cudaCreateChannelDescHalf*() functions.
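For example, the following device code sketch reads 16-bit floating-point values stored as unsigned short, scales them, and writes them back; the kernel name and scale parameter are illustrative:
__global__ void ScaleHalf(unsigned short* data, int n, float scale)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        float f = __half2float(data[i]); // 16-bit -> 32-bit
        data[i] = __float2half(f * scale); // 32-bit -> 16-bit
    }
}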
3.2.4.2 Surface Memory
A CUDA array (described in Section 3.2.4.3), created with the
cudaArraySurfaceLoadStore flag, can be read and written via a surface reference
using the functions described in Section B.9.
Section G.1 lists the maximum surface width, height, and depth depending on the
compute capability of the device.
3.2.4.2.1 Surface Reference Declaration
A surface reference is declared at file scope as a variable of type surface:
surface<void, Dim> surfRef;
where Dim specifies the dimensionality of the surface reference and is equal to 1 or
2; Dim is an optional argument which defaults to 1.
A surface reference can only be declared as a static global variable and cannot be
passed as an argument to a function.
3.2.4.2.2 Surface Binding
Before a kernel can use a surface reference to access a CUDA array, the surface
reference must be bound to the CUDA array using
cudaBindSurfaceToArray().
The following code samples bind a surface reference to a CUDA array cuArray:
 Using the low-level API:
surface<void, 2> surfRef;
surfaceReference* surfRefPtr;
cudaGetSurfaceReference(&surfRefPtr, "surfRef");
cudaChannelFormatDesc channelDesc;
cudaGetChannelDesc(&channelDesc, cuArray);
cudaBindSurfaceToArray(surfRef, cuArray, &channelDesc);
 Using the high-level API:
surface<void, 2> surfRef;
cudaBindSurfaceToArray(surfRef, cuArray);
A CUDA array must be read and written using surface functions of matching
dimensionality and type and via a surface reference of matching dimensionality;
otherwise, the results of reading and writing the CUDA array are undefined.
Unlike texture memory, surface memory uses byte addressing. This means that the
x-coordinate used to access a texture element via texture functions needs to be
multiplied by the byte size of the element to access the same element via a surface
function. For example, the element at texture coordinate x of a one-dimensional
floating-point CUDA array bound to a texture reference texRef and a surface
reference surfRef is read using tex1D(texRef, x) via texRef, but
surf1Dread(surfRef, 4*x) via surfRef. Similarly, the element at texture
coordinates x and y of a two-dimensional floating-point CUDA array bound to a
texture reference texRef and a surface reference surfRef is accessed using
tex2D(texRef, x, y) via texRef, but surf2Dread(surfRef, 4*x, y)
via surfRef (the byte offset of the y-coordinate is internally calculated from the
underlying line pitch of the CUDA array).
The following code sample uses a simple copy kernel to copy the contents of one 2D surface to another:
// 2D surfaces
surface<void, 2> inputSurfRef;
surface<void, 2> outputSurfRef;

// Simple copy kernel
__global__ void copyKernel(int width, int height)
{
// Calculate surface coordinates
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
if (x < width && y < height) {
uchar4 data;
// Read from input surface
surf2Dread(&data, inputSurfRef, x * 4, y);
// Write to output surface
surf2Dwrite(data, outputSurfRef, x * 4, y);
}
}

// Host code
int main()
{
// Allocate CUDA arrays in device memory
cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc(8, 8, 8, 8,
cudaChannelFormatKindUnsigned);
cudaArray* cuInputArray;
cudaMallocArray(&cuInputArray, &channelDesc, width, height,
cudaArraySurfaceLoadStore);
cudaArray* cuOutputArray;
cudaMallocArray(&cuOutputArray, &channelDesc, width, height,
cudaArraySurfaceLoadStore);

// Copy to device memory some data located at address h_data
// in host memory
cudaMemcpyToArray(cuInputArray, 0, 0, h_data, size,
cudaMemcpyHostToDevice);

// Bind the arrays to the surface references
cudaBindSurfaceToArray(inputSurfRef, cuInputArray);
cudaBindSurfaceToArray(outputSurfRef, cuOutputArray);
// Invoke kernel
dim3 dimBlock(16, 16);
dim3 dimGrid((width + dimBlock.x - 1) / dimBlock.x,
(height + dimBlock.y - 1) / dimBlock.y);
copyKernel<<<dimGrid, dimBlock>>>(width, height);

// Free device memory
cudaFreeArray(cuInputArray);
cudaFreeArray(cuOutputArray);
}
3.2.4.3 CUDA Arrays
CUDA arrays are opaque memory layouts optimized for texture fetching. They are
one-dimensional, two-dimensional, or three-dimensional and composed of
elements, each of which has 1, 2 or 4 components that may be signed or unsigned
8-, 16- or 32-bit integers, 16-bit floats, or 32-bit floats. CUDA arrays are only
readable by kernels through texture fetching and may only be bound to texture
references with the same number of packed components.
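For example, a two-dimensional CUDA array of floating-point elements can be allocated and freed as in the following sketch, where width and height are assumed to be defined:
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
cudaArray* cuArray;
cudaMallocArray(&cuArray, &channelDesc, width, height);
...
cudaFreeArray(cuArray);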
3.2.4.4 Read/Write Coherency
Texture and surface memory are cached (see Section 5.3.2.5) and, within the same
kernel call, the cache is not kept coherent with respect to global memory writes and
surface memory writes, so any texture fetch or surface read to an address that has
been written to via a global write or a surface write in the same kernel call returns
undefined data. In other words, a thread can safely read some texture or surface
memory location only if this memory location has been updated by a previous
kernel call or memory copy, but not if it has been previously updated by the same
thread or another thread from the same kernel call.
3.2.5 Page-Locked Host Memory
The runtime also provides functions to allocate and free page-locked (also known as
pinned) host memory – as opposed to regular pageable host memory allocated by
malloc(): cudaHostAlloc() and cudaFreeHost().
Using page-locked host memory has several benefits:
 Copies between page-locked host memory and device memory can be
performed concurrently with kernel execution for some devices as mentioned in
Section 3.2.6;
 On some devices, page-locked host memory can be mapped into the address
space of the device, eliminating the need to copy it to or from device memory
as detailed in Section 3.2.5.3;
 On systems with a front-side bus, bandwidth between host memory and device
memory is higher if host memory is allocated as page-locked and even higher if
in addition it is allocated as write-combining as described in Section 3.2.5.2.
Page-locked host memory is a scarce resource however, so allocations in page-
locked memory will start failing long before allocations in pageable memory. In
addition, by reducing the amount of physical memory available to the operating
system for paging, allocating too much page-locked memory reduces overall system
performance.
The simple zero-copy SDK sample comes with a detailed document on the page-
locked memory APIs.
3.2.5.1 Portable Memory
A block of page-locked memory can be used by any host threads, but by default, the
benefits of using page-locked memory described above are only available for the
thread that allocates it. To make these advantages available to all threads, it needs to
be allocated by passing flag cudaHostAllocPortable to cudaHostAlloc().
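A minimal sketch of a portable allocation, assuming size holds the desired number of bytes:
float* hostPtr;
cudaHostAlloc((void**)&hostPtr, size, cudaHostAllocPortable);
...
cudaFreeHost(hostPtr);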
3.2.5.2 Write-Combining Memory
By default page-locked host memory is allocated as cacheable. It can optionally be
allocated as write-combining instead by passing flag
cudaHostAllocWriteCombined to cudaHostAlloc(). Write-combining
memory frees up L1 and L2 cache resources, making more cache available to the
rest of the application. In addition, write-combining memory is not snooped during
transfers across the PCI Express bus, which can improve transfer performance by
up to 40%.
Reading from write-combining memory from the host is prohibitively slow, so
write-combining memory should in general be used for memory that the host only
writes to.
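A minimal sketch of a write-combining allocation for a buffer the host only writes to (the flag can be combined with cudaHostAllocPortable if needed):
float* stagePtr;
cudaHostAlloc((void**)&stagePtr, size, cudaHostAllocWriteCombined);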
3.2.5.3 Mapped Memory
On devices of compute capability greater than 1.0, a block of page-locked host
memory can also be mapped into the address space of the device by passing flag
cudaHostAllocMapped to cudaHostAlloc(). Such a block has therefore two
addresses: one in host memory and one in device memory. The host memory
pointer is returned by cudaHostAlloc() and the device memory pointer can be
retrieved using cudaHostGetDevicePointer() and then used to access the
block from within a kernel.
Accessing host memory directly from within a kernel has several advantages:
 There is no need to allocate a block in device memory and copy data between
this block and the block in host memory; data transfers are implicitly performed
as needed by the kernel;
 There is no need to use streams (see Section 3.2.6.4) to overlap data transfers
with kernel execution; the kernel-originated data transfers automatically overlap
with kernel execution.
Since mapped page-locked memory is shared between host and device however, the
application must synchronize memory accesses using streams or events (see
Section 3.2.6) to avoid any potential read-after-write, write-after-read, or write-after-
write hazards.
A block of page-locked host memory can be allocated as both mapped and portable
(see Section 3.2.5.1), in which case each host thread that needs to map the block to
its device address space must call cudaHostGetDevicePointer() to retrieve a
device pointer, as device pointers will generally differ from one host thread to the
other.
To be able to retrieve the device pointer to any mapped page-locked memory within
a given host thread, page-locked memory mapping must be enabled by calling
cudaSetDeviceFlags() with the cudaDeviceMapHost flag before any other
CUDA call is performed by the thread. Otherwise,
cudaHostGetDevicePointer() will return an error.
cudaHostGetDevicePointer() also returns an error if the device does not
support mapped page-locked host memory.
Applications may query whether a device supports mapped page-locked host
memory or not by calling cudaGetDeviceProperties() and checking the
canMapHostMemory property.
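Putting these steps together, a sketch of allocating mapped page-locked memory and retrieving its device pointer might look as follows (error checking omitted; size is assumed to be defined):
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.canMapHostMemory) {
    cudaSetDeviceFlags(cudaDeviceMapHost); // before any other CUDA call
    float* hostPtr;
    cudaHostAlloc((void**)&hostPtr, size, cudaHostAllocMapped);
    float* devPtr;
    cudaHostGetDevicePointer((void**)&devPtr, hostPtr, 0);
    // devPtr can now be passed to a kernel
}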
Note that atomic functions (Section B.11) operating on mapped page-locked
memory are not atomic from the point of view of the host or other devices.
3.2.6 Asynchronous Concurrent Execution
3.2.6.1 Concurrent Execution between Host and Device
In order to facilitate concurrent execution between host and device, some function
calls are asynchronous: Control is returned to the host thread before the device has
completed the requested task. These are:
 Kernel launches;
 Device ↔ device memory copies;
 Host ↔ device memory copies of a memory block of 64 KB or less;
 Memory copies performed by functions that are suffixed with Async;
 Memory set function calls.
Programmers can globally disable asynchronous kernel launches for all CUDA
applications running on a system by setting the CUDA_LAUNCH_BLOCKING
environment variable to 1. This feature is provided for debugging purposes only and
should never be used as a way to make production software run reliably.
When an application is run via a CUDA debugger or profiler (cuda-gdb, CUDA
Visual Profiler, Parallel Nsight), all launches are synchronous.
3.2.6.2 Overlap of Data Transfer and Kernel Execution
Some devices of compute capability 1.1 and higher can perform copies between
page-locked host memory and device memory concurrently with kernel execution.
Applications may query this capability by calling cudaGetDeviceProperties()
and checking the deviceOverlap property. This capability is currently supported
only for memory copies that do not involve CUDA arrays or 2D arrays allocated
through cudaMallocPitch() (see Section 3.2.1).
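For example, the capability can be queried as in the following sketch:
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.deviceOverlap) {
    // copies involving page-locked host memory can overlap
    // with kernel execution on this device
}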
3.2.6.3 Concurrent Kernel Execution
Some devices of compute capability 2.x can execute multiple kernels concurrently.
Applications may query this capability by calling cudaGetDeviceProperties()
and checking the concurrentKernels property.
The maximum number of kernel launches that a device can execute concurrently is
sixteen.
A kernel from one CUDA context cannot execute concurrently with a kernel from
another CUDA context.
Kernels that use many textures or a large amount of local memory are less likely to
execute concurrently with other kernels.
3.2.6.4 Concurrent Data Transfers
Some devices of compute capability 2.x can perform a copy from page-locked host
memory to device memory concurrently with a copy from device memory to page-
locked host memory.
3.2.6.5 Stream
Applications manage concurrency through streams. A stream is a sequence of
commands that execute in order. Different streams, on the other hand, may execute
their commands out of order with respect to one another or concurrently; this
behavior is not guaranteed and should therefore not be relied upon for correctness
(e.g. inter-kernel communication is undefined).
3.2.6.5.1 Creation and Destruction
A stream is defined by creating a stream object and specifying it as the stream
parameter to a sequence of kernel launches and host ↔ device memory copies. The
following code sample creates two streams and allocates an array hostPtr of
float in page-locked memory.
cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);
Each of these streams is defined by the following code sample as a sequence of one
memory copy from host to device, one kernel launch, and one memory copy from
device to host:
for (int i = 0; i < 2; ++i) {
cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
MyKernel<<<100, 512, 0, stream[i]>>>
(outputDevPtr + i * size, inputDevPtr + i * size, size);
cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
}
Each stream copies its portion of input array hostPtr to array inputDevPtr in
device memory, processes inputDevPtr on the device by calling MyKernel(), and
copies the result outputDevPtr back to the same portion of hostPtr.
Section 3.2.6.5.4 describes how the streams overlap in this example depending on
the capability of the device. Note that hostPtr must point to page-locked host
memory for any overlap to occur.
Streams are released by calling cudaStreamDestroy().
for (int i = 0; i < 2; ++i)
cudaStreamDestroy(stream[i]);
cudaStreamDestroy() waits for all preceding commands in the given stream to
complete before destroying the stream and returning control to the host thread.
3.2.6.5.2 Explicit Synchronization
There are various ways to explicitly synchronize streams with each other.
cudaThreadSynchronize() waits until all preceding commands in all streams have
completed.
cudaStreamSynchronize() takes a stream as a parameter and waits until all
preceding commands in the given stream have completed. It can be used to
synchronize the host with a specific stream, allowing other streams to continue
executing on the device.
cudaStreamWaitEvent() takes a stream and an event as parameters (see
Section 3.2.6.6 for a description of events) and makes all the commands added to
the given stream after the call to cudaStreamWaitEvent() delay their execution
until the given event has completed. The stream can be 0, in which case all the
commands added to any stream after the call to cudaStreamWaitEvent() wait on
the event.
cudaStreamQuery() provides applications with a way to know if all preceding
commands in a stream have completed.
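For example, a dependency of stream 1 on work previously issued to stream 0 can be expressed with cudaStreamWaitEvent() as in the following sketch (the streams, event, and kernel are reused from the surrounding samples):
cudaEvent_t event;
cudaEventCreate(&event);
cudaEventRecord(event, stream[0]); // capture work issued to stream 0 so far
cudaStreamWaitEvent(stream[1], event, 0); // stream 1 waits for the event
MyKernel<<<100, 512, 0, stream[1]>>>
    (outputDevPtr, inputDevPtr, size); // executes after the event completes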
To avoid unnecessary slowdowns, all these synchronization functions are usually
best used for timing purposes or to isolate a launch or memory copy that is failing.
3.2.6.5.3 Implicit Synchronization
Two commands from different streams cannot run concurrently if either one of the
following operations is issued in-between them by the host thread:
 a page-locked host memory allocation,
 a device memory allocation,
 a device memory set,
 a device ↔ device memory copy,
 any CUDA command to stream 0 (including kernel launches and host ↔ device
memory copies that do not specify any stream parameter),
 a switch between the L1/shared memory configurations described in
Section G.4.1.
For devices that support concurrent kernel execution, any operation that requires a
dependency check to see if a streamed kernel launch is complete:
 Can start executing only when all thread blocks of all prior kernel launches from
any stream in the CUDA context have started executing;
 Blocks all later kernel launches from any stream in the CUDA context until the
kernel launch being checked is complete.
Operations that require a dependency check include any other commands within the
same stream as the launch being checked and any call to cudaStreamQuery() on
that stream. Therefore, applications should follow these guidelines to improve their
potential for concurrent kernel execution:
 All independent operations should be issued before dependent operations,
 Synchronization of any kind should be delayed as long as possible.
3.2.6.5.4 Overlapping Behavior
The amount of execution overlap between two streams depends on the order in
which the commands are issued to each stream and whether or not the device
supports overlap of data transfer and kernel execution (Section 3.2.6.2), concurrent
kernel execution (Section 3.2.6.3), and/or concurrent data transfers (Section 3.2.6.4).
For example, on devices that do not support concurrent data transfers, the two
streams of the code sample of Section 3.2.6.5.1 do not overlap at all because the
memory copy from host to device is issued to stream 1 after the memory copy from
device to host is issued to stream 0. If the code is rewritten the following way (and
assuming the device supports overlap of data transfer and kernel execution)
for (int i = 0; i < 2; ++i)
cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 2; ++i)
MyKernel<<<100, 512, 0, stream[i]>>>
(outputDevPtr + i * size, inputDevPtr + i * size, size);
for (int i = 0; i < 2; ++i)
cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
then the memory copy from host to device issued to stream 1 overlaps with the
kernel launch issued to stream 0.
On devices that do support concurrent data transfers, the two streams of the code
sample of Section 3.2.6.5.1 do overlap: The memory copy from host to device
issued to stream 1 overlaps with the memory copy from device to host issued to
stream 0 and even with the kernel launch issued to stream 0 (assuming the device
supports overlap of data transfer and kernel execution). However, the kernel
executions cannot possibly overlap because the kernel launch is issued to stream 1
after the memory copy from device to host is issued to stream 0, so it is blocked
until the kernel launch issued to stream 0 is complete as per Section 3.2.6.5.3. If the
code is rewritten as above, the kernel executions overlap (assuming the device
supports concurrent kernel execution) since the kernel launch is issued to stream 1
before the memory copy from device to host is issued to stream 0. In that case
however, the memory copy from device to host issued to stream 0 only overlaps
with the last thread blocks of the kernel launch issued to stream 1 as per
Section 3.2.6.5.3, which can represent a small portion of the total execution time of
the kernel.
3.2.6.6 Event
The runtime also provides a way to closely monitor the device's progress, as well as
perform accurate timing, by letting the application asynchronously record events at
any point in the program and query when these events are completed. An event has
completed when all tasks (or, optionally, all commands in a given stream)
preceding the event have completed. Events in stream zero are completed after all
preceding tasks and commands in all streams are completed.
The following code sample creates two events:
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
These events can be used to time the code sample of the previous section the
following way:
cudaEventRecord(start, 0);
for (int i = 0; i < 2; ++i) {
cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
MyKernel<<<100, 512, 0, stream[i]>>>
(outputDevPtr + i * size, inputDevPtr + i * size, size);
cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop);
They are destroyed this way:
cudaEventDestroy(start);
cudaEventDestroy(stop);
3.2.6.7 Synchronous Calls
When a synchronous function is called, control is not returned to the host thread
before the device has completed the requested task. Whether the host thread will
then yield, block, or spin can be specified by calling cudaSetDeviceFlags() with
some specific flags (see reference manual for details) before any other CUDA call is
performed by the host thread.
3.2.7 Graphics Interoperability
Some resources from OpenGL and Direct3D may be mapped into the address
space of CUDA, either to enable CUDA to read data written by OpenGL or
Direct3D, or to enable CUDA to write data for consumption by OpenGL or
Direct3D.
A resource must be registered to CUDA before it can be mapped using the
functions mentioned in Sections 3.2.7.1 and 3.2.7.2. These functions return a
pointer to a CUDA graphics resource of type struct cudaGraphicsResource.
Registering a resource is potentially high-overhead and therefore typically called only
once per resource. A CUDA graphics resource is unregistered using
cudaGraphicsUnregisterResource().
Once a resource is registered to CUDA, it can be mapped and unmapped as many
times as necessary using cudaGraphicsMapResources() and
cudaGraphicsUnmapResources().
cudaGraphicsResourceSetMapFlags() can be called to specify usage hints
(write-only, read-only) that the CUDA driver can use to optimize resource
management.
A mapped resource can be read from or written to by kernels using the device
memory address returned by cudaGraphicsResourceGetMappedPointer()
for buffers and cudaGraphicsSubResourceGetMappedArray() for CUDA
arrays.
Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA
produces undefined results.
Sections 3.2.7.1 and 3.2.7.2 give specifics for each graphics API and some code
samples.
3.2.7.1 OpenGL Interoperability
Interoperability with OpenGL requires that the CUDA device be specified by
cudaGLSetGLDevice() before any other runtime calls. Note that
cudaSetDevice() and cudaGLSetGLDevice() are mutually exclusive.
The OpenGL resources that may be mapped into the address space of CUDA are
OpenGL buffer, texture, and renderbuffer objects.
A buffer object is registered using cudaGraphicsGLRegisterBuffer(). In
CUDA, it appears as a device pointer and can therefore be read and written by
kernels or via cudaMemcpy() calls.
A texture or renderbuffer object is registered using
cudaGraphicsGLRegisterImage(). In CUDA, it appears as a CUDA array
and can therefore be bound to a texture reference and be read and written by
kernels or via cudaMemcpy2D() calls. cudaGraphicsGLRegisterImage()
supports all texture formats with 1, 2, or 4 components and an internal type of float
(e.g. GL_RGBA_FLOAT32) and unnormalized integer (e.g. GL_RGBA8UI). It does
not currently support normalized integer formats (e.g. GL_RGBA8). Please note that
since GL_RGBA8UI is an OpenGL 3.0 texture format, it can only be written by
shaders, not the fixed function pipeline.
The following code sample uses a kernel to dynamically modify a 2D
width x height grid of vertices stored in a vertex buffer object:
GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;

int main()
{
// Explicitly set device
cudaGLSetGLDevice(0);

// Initialize OpenGL and GLUT
...
glutDisplayFunc(display);

// Create buffer object and register it with CUDA
glGenBuffers(1, &positionsVBO);
glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
unsigned int size = width * height * 4 * sizeof(float);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
cudaGraphicsGLRegisterBuffer(&positionsVBO_CUDA,
positionsVBO,
cudaGraphicsMapFlagsWriteDiscard);

// Launch rendering loop
glutMainLoop();
}

void display()
{
// Map buffer object for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVBO_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVBO_CUDA);

// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);

// Unmap buffer object
cudaGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

// Render from buffer object
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, width * height);
glDisableClientState(GL_VERTEX_ARRAY);

// Swap buffers
glutSwapBuffers();
glutPostRedisplay();
}

void deleteVBO()
{
cudaGraphicsUnregisterResource(positionsVBO_CUDA);
glDeleteBuffers(1, &positionsVBO);
}

__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;

// calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;

// Write positions
positions[y * width + x] = make_float4(u, w, v, 1.0f);
}
On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to
retrieve the CUDA device associated to the handle returned by
wglEnumGpusNV(). Quadro GPUs offer higher performance OpenGL
interoperability than GeForce and Tesla GPUs in a multi-GPU configuration where
OpenGL rendering is performed on the Quadro GPU and CUDA computations are
performed on other GPUs in the system.
3.2.7.2 Direct3D Interoperability
Direct3D interoperability is supported for Direct3D 9, Direct3D 10, and
Direct3D 11.
A CUDA context may interoperate with only one Direct3D device at a time and the
CUDA context and Direct3D device must be created on the same GPU. Moreover,
the Direct3D device must be created with the
D3DCREATE_HARDWARE_VERTEXPROCESSING flag.
Interoperability with Direct3D requires that the Direct3D device be specified by
cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), or
cudaD3D11SetDirect3DDevice() before any other runtime calls.
cudaD3D9GetDevice(), cudaD3D10GetDevice(), and
cudaD3D11GetDevice() can be used to retrieve the CUDA device associated to
some adapter.
A set of calls is also available to allow the creation of CUDA devices with
interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate
Frame Rendering) mode: cudaD3D[9|10|11]GetDevices(). A call to
cudaD3D[9|10|11]GetDevices() can be used to obtain a list of CUDA device
handles that can be passed as the (optional) last parameter to
cudaD3D[9|10|11]SetDirect3DDevice().
The application has the choice to either create multiple CPU threads, each using a
different CUDA context, or a single CPU thread using multiple CUDA contexts.
Each of these CUDA contexts would be created using one of the CUDA device
handles returned by cudaD3D[9|10|11]GetDevices().
If using a single CPU thread, the application relies on the interoperability between
CUDA driver and runtime APIs (Section 3.4), which allows it to call
cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context
active at a given time.
See Section 4.3 for general recommendations related to interoperability between
Direct3D devices using SLI and CUDA contexts.
The Direct3D resources that may be mapped into the address space of CUDA are
Direct3D buffers, textures, and surfaces. These resources are registered using
cudaGraphicsD3D9RegisterResource(),
cudaGraphicsD3D10RegisterResource(), and
cudaGraphicsD3D11RegisterResource().
The following code sample uses a kernel to dynamically modify a 2D
width x height grid of vertices stored in a vertex buffer object.
Direct3D 9 Version:
IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
// Initialize Direct3D
D3D = Direct3DCreate9(D3D_SDK_VERSION);

// Get a CUDA-enabled adapter
unsigned int adapter = 0;
for (; adapter < D3D->GetAdapterCount(); adapter++) {
D3DADAPTER_IDENTIFIER9 adapterId;
D3D->GetAdapterIdentifier(adapter, 0, &adapterId);
int dev;
if (cudaD3D9GetDevice(&dev, adapterId.DeviceName)
== cudaSuccess)
break;
}

// Create device
...
D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
D3DCREATE_HARDWARE_VERTEXPROCESSING,
&params, &device);

// Register device with CUDA
cudaD3D9SetDirect3DDevice(device);

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
D3DPOOL_DEFAULT, &positionsVB, 0);
cudaGraphicsD3D9RegisterResource(&positionsVB_CUDA,
positionsVB,
cudaGraphicsRegisterFlagsNone);
cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
cudaGraphicsMapFlagsWriteDiscard);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVB_CUDA);

// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);

// Unmap vertex buffer
cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;

// Calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;

// Write positions
positions[y * width + x] =
make_float4(u, w, v, __int_as_float(0xff00ff00));
}
Direct3D 10 Version:
ID3D10Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D10Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter)))
break;
int dev;
if (cudaD3D10GetDevice(&dev, adapter) == cudaSuccess)
break;
adapter->Release();
adapter = 0; // keep looking for a CUDA-enabled adapter
}
factory->Release();

// Create swap chain and device
...
D3D10CreateDeviceAndSwapChain(adapter,
D3D10_DRIVER_TYPE_HARDWARE, 0,
D3D10_CREATE_DEVICE_DEBUG,
D3D10_SDK_VERSION,
&swapChainDesc, &swapChain,
&device);
adapter->Release();

// Register device with CUDA
cudaD3D10SetDirect3DDevice(device);

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D10_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D10_USAGE_DEFAULT;
bufferDesc.ByteWidth = size;
bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cudaGraphicsD3D10RegisterResource(&positionsVB_CUDA,
positionsVB,
cudaGraphicsRegisterFlagsNone);
cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
cudaGraphicsMapFlagsWriteDiscard);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVB_CUDA);

// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);

// Unmap vertex buffer
cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;

// Calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;

// Write positions
positions[y * width + x] =
make_float4(u, w, v, __int_as_float(0xff00ff00));
}
Direct3D 11 Version:
ID3D11Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D11Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter)))
break;
int dev;
if (cudaD3D11GetDevice(&dev, adapter) == cudaSuccess)
break;
adapter->Release();
adapter = 0; // keep looking for a CUDA-enabled adapter
}
factory->Release();

// Create swap chain and device
...
sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
D3D11_DRIVER_TYPE_HARDWARE,
0,
D3D11_CREATE_DEVICE_DEBUG,
featureLevels, 3,
D3D11_SDK_VERSION,
&swapChainDesc, &swapChain,
&device,
&featureLevel,
&deviceContext);
adapter->Release();

// Register device with CUDA
cudaD3D11SetDirect3DDevice(device);

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.ByteWidth = size;
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cudaGraphicsD3D11RegisterResource(&positionsVB_CUDA,
positionsVB,
cudaGraphicsRegisterFlagsNone);
cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
cudaGraphicsMapFlagsWriteDiscard);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVB_CUDA);

// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);

// Unmap vertex buffer
cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;

// Calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;

// Write positions
positions[y * width + x] =
make_float4(u, w, v, __int_as_float(0xff00ff00));
}
3.2.8 Error Handling
All runtime functions return an error code, but for an asynchronous function (see
Section 3.2.6), this error code cannot possibly report any of the asynchronous errors
that could occur on the device since the function returns before the device has
completed the task; the error code only reports errors that occur on the host prior
to executing the task, typically related to parameter validation. If an asynchronous
error occurs, it will be reported by some subsequent unrelated runtime function call.
The only way to check for asynchronous errors just after some asynchronous
function call is therefore to synchronize just after the call by calling
cudaThreadSynchronize() (or by using any other synchronization
mechanisms described in Section 3.2.6) and checking the error code returned by
cudaThreadSynchronize().
The runtime maintains an error variable for each host thread that is initialized to
cudaSuccess and is overwritten by the error code every time an error occurs (be
it a parameter validation error or an asynchronous error).
cudaPeekAtLastError() returns this variable. cudaGetLastError() returns
this variable and resets it to cudaSuccess.
Kernel launches do not return any error code, so cudaPeekAtLastError() or
cudaGetLastError() must be called just after the kernel launch to retrieve any
pre-launch errors. To ensure that any error returned by
cudaPeekAtLastError() or cudaGetLastError() does not originate from
calls prior to the kernel launch, one has to make sure that the runtime error variable
is set to cudaSuccess just before the kernel launch, for example, by calling
cudaGetLastError() just before the kernel launch. Kernel launches are
asynchronous, so to check for asynchronous errors, the application must
synchronize in-between the kernel launch and the call to
cudaPeekAtLastError() or cudaGetLastError().
Note that cudaErrorNotReady, which may be returned by cudaStreamQuery()
and cudaEventQuery(), is not considered an error and is therefore not reported
by cudaPeekAtLastError() or cudaGetLastError().
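The following code sequence is a minimal sketch of this checking pattern
(MyKernel and its launch configuration are placeholders, not part of the original
samples):
// Clear the error variable so that a subsequent error cannot be
// attributed to calls prior to the launch
cudaGetLastError();
MyKernel<<<dimGrid, dimBlock>>>(...);
cudaError_t err = cudaGetLastError(); // pre-launch errors
if (err != cudaSuccess)
printf("Launch failed: %s\n", cudaGetErrorString(err));
err = cudaThreadSynchronize(); // waits and reports asynchronous errors
if (err != cudaSuccess)
printf("Kernel failed: %s\n", cudaGetErrorString(err));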
3.2.9 Call Stack
On devices of compute capability 2.x, the size of the call stack can be queried using
cudaThreadGetLimit() and set using cudaThreadSetLimit().
When the call stack overflows, the kernel call fails with a stack overflow error if the
application is run via a CUDA debugger (cuda-gdb, Parallel Nsight), or with an
unspecified launch error otherwise.
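As a minimal sketch of these calls (the doubling of the stack size is an arbitrary
example):
size_t stackSize;
cudaThreadGetLimit(&stackSize, cudaLimitStackSize);
cudaThreadSetLimit(cudaLimitStackSize, 2 * stackSize);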
3.3 Driver API
The driver API is a handle-based, imperative API: Most objects are referenced by
opaque handles that may be specified to functions to manipulate the objects.
The objects available in the driver API are summarized in Table 3-1.
Table 3-1. Objects Available in the CUDA Driver API
Object Handle Description
Device CUdevice CUDA-enabled device
Context CUcontext Roughly equivalent to a CPU process
Module CUmodule Roughly equivalent to a dynamic library
Function CUfunction Kernel
Heap memory CUdeviceptr Pointer to device memory
CUDA array CUarray Opaque container for one-dimensional or two-dimensional data on the device, readable via texture or surface references
Texture reference CUtexref Object that describes how to interpret texture memory data
Surface reference CUsurfref Object that describes how to read or write CUDA arrays
The driver API is implemented in the nvcuda dynamic library and all its entry
points are prefixed with cu.
The driver API must be initialized with cuInit() before any function from the
driver API is called. A CUDA context must then be created that is attached to a
specific device and made current to the calling host thread as detailed in
Section 3.3.1.
Within a CUDA context, kernels are explicitly loaded as PTX or binary objects by
the host code as described in Section 3.3.2. Kernels written in C must therefore be
compiled separately into PTX or binary objects. Kernels are launched using API
entry points as described in Section 3.3.3.
Any application that wants to run on future device architectures must load PTX, not
binary code. This is because binary code is architecture-specific and therefore
incompatible with future architectures, whereas PTX code is compiled to binary
code at load time by the driver.
Here is the host code of the sample from Section 2.1 written using the driver API:
int main()
{
int N = ...;
size_t size = N * sizeof(float);

// Allocate input vectors h_A and h_B in host memory
float* h_A = (float*)malloc(size);
float* h_B = (float*)malloc(size);

// Initialize input vectors
...

// Initialize
cuInit(0);

// Get number of devices supporting CUDA
int deviceCount = 0;
cuDeviceGetCount(&deviceCount);
if (deviceCount == 0) {
printf("There is no device supporting CUDA.\n");
exit (0);
}

// Get handle for device 0
CUdevice cuDevice;
cuDeviceGet(&cuDevice, 0);

// Create context
CUcontext cuContext;
cuCtxCreate(&cuContext, 0, cuDevice);
// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, "VecAdd.ptx");

// Allocate vectors in device memory
CUdeviceptr d_A;
cuMemAlloc(&d_A, size);
CUdeviceptr d_B;
cuMemAlloc(&d_B, size);
CUdeviceptr d_C;
cuMemAlloc(&d_C, size);

// Copy vectors from host memory to device memory
cuMemcpyHtoD(d_A, h_A, size);
cuMemcpyHtoD(d_B, h_B, size);

// Get function handle from module
CUfunction vecAdd;
cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

// Invoke kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(d_A));
cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
offset += sizeof(d_A);
ALIGN_UP(offset, __alignof(d_B));
cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
offset += sizeof(d_B);
ALIGN_UP(offset, __alignof(d_C));
cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
offset += sizeof(d_C);
ALIGN_UP(offset, __alignof(N));
cuParamSeti(vecAdd, offset, N);
offset += sizeof(N);
cuParamSetSize(vecAdd, offset);
int threadsPerBlock = 256;
int blocksPerGrid =
(N + threadsPerBlock - 1) / threadsPerBlock;
cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
cuLaunchGrid(vecAdd, blocksPerGrid, 1);

...
}
Full code can be found in the vectorAddDrv SDK code sample.
3.3.1 Context
A CUDA context is analogous to a CPU process. All resources and actions
performed within the driver API are encapsulated inside a CUDA context, and the
system automatically cleans up these resources when the context is destroyed.
Besides objects such as modules and texture or surface references, each context has
its own distinct 32-bit address space. As a result, CUdeviceptr values from
different contexts reference different memory locations.
A host thread may have only one device context current at a time. When a context is
created with cuCtxCreate(), it is made current to the calling host thread. CUDA
functions that operate in a context (most functions that do not involve device
enumeration or context management) will return
CUDA_ERROR_INVALID_CONTEXT if a valid context is not current to the thread.
Each host thread has a stack of current contexts. cuCtxCreate() pushes the new
context onto the top of the stack. cuCtxPopCurrent() may be called to detach
the context from the host thread. The context is then "floating" and may be pushed
as the current context for any host thread. cuCtxPopCurrent() also restores the
previous current context, if any.
A usage count is also maintained for each context. cuCtxCreate() creates a
context with a usage count of 1. cuCtxAttach() increments the usage count and
cuCtxDetach() decrements it. A context is destroyed when the usage count goes
to 0 when calling cuCtxDetach() or cuCtxDestroy().
Usage count facilitates interoperability between third party authored code operating
in the same context. For example, if three libraries are loaded to use the same
context, each library would call cuCtxAttach() to increment the usage count and
cuCtxDetach() to decrement the usage count when the library is done using the
context. For most libraries, it is expected that the application will have created a
context before loading or initializing the library; that way, the application can create
the context using its own heuristics, and the library simply operates on the context
handed to it. Libraries that wish to create their own contexts (unbeknownst to their
API clients, who may or may not have created contexts of their own) would use
cuCtxPushCurrent() and cuCtxPopCurrent() as illustrated in Figure 3-3.

Figure 3-3. Library Context Management (library initialization
call: cuCtxCreate(), initialize the context, cuCtxPopCurrent();
library call: cuCtxPushCurrent(), use the context,
cuCtxPopCurrent())
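As a minimal sketch of the pattern shown in Figure 3-3 (libContext is a
hypothetical context created by the library at initialization time with
cuCtxCreate() followed by cuCtxPopCurrent()):
void libraryCall()
{
// Make the library's own context current for this call
cuCtxPushCurrent(libContext);
... // use the context
// Restore the caller's previous context, if any
cuCtxPopCurrent(NULL);
}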
3.3.2 Module
Modules are dynamically loadable packages of device code and data, akin to DLLs in
Windows, that are output by nvcc (see Section 3.1). The names for all symbols,
including functions, global variables, and texture or surface references, are
maintained at module scope so that modules written by independent third parties
may interoperate in the same CUDA context.
This code sample loads a module and retrieves a handle to some kernel:
CUmodule cuModule;
cuModuleLoad(&cuModule, "myModule.ptx");
CUfunction myKernel;
cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
This code sample compiles and loads a new module from PTX code and parses
compilation errors:
#define ERROR_BUFFER_SIZE 100
CUmodule cuModule;
CUjit_option options[3];
void* values[3];
char* PTXCode = "some PTX code";
options[0] = CU_JIT_ERROR_LOG_BUFFER;
values[0] = (void*)malloc(ERROR_BUFFER_SIZE);
options[1] = CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES;
values[1] = (void*)ERROR_BUFFER_SIZE;
options[2] = CU_JIT_TARGET_FROM_CUCONTEXT;
values[2] = 0;
cuModuleLoadDataEx(&cuModule, PTXCode, 3, options, values);
for (int i = 0; i < (int)(size_t)values[1]; ++i) {
// Parse error string here
}
3.3.3 Kernel Execution
cuFuncSetBlockShape() sets the number of threads per block for a given
function, and how their threadIDs are assigned.
cuFuncSetSharedSize() sets the size of shared memory for the function.
The cuParam*() family of functions is used to specify the parameters that will be
provided to the kernel the next time cuLaunchGrid() or cuLaunch() is
invoked to launch the kernel.
The second argument of each of the cuParam*() functions specifies the offset of
the parameter in the parameter stack. This offset must match the alignment
requirement for the parameter type in device code.
Alignment requirements in device code for the built-in vector types are listed in
Table B-1. For all other basic types, the alignment requirement in device code
matches the alignment requirement in host code and can therefore be obtained
using __alignof(). The only exception is when the host compiler aligns double
and long long (and long on a 64-bit system) on a one-word boundary instead of
a two-word boundary (for example, using gcc's compilation flag -mno-align-
double) since in device code these types are always aligned on a two-word
boundary.
CUdeviceptr is an integer, but represents a pointer, so its alignment requirement
is __alignof(void*).
The following code sample uses a macro to adjust the offset of each parameter to
meet its alignment requirement.
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;

int i;
ALIGN_UP(offset, __alignof(i));
cuParamSeti(cuFunction, offset, i);
offset += sizeof(i);

float4 f4;
ALIGN_UP(offset, 16); // float4's alignment is 16
cuParamSetv(cuFunction, offset, &f4, sizeof(f4));
offset += sizeof(f4);

char c;
ALIGN_UP(offset, __alignof(c));
cuParamSeti(cuFunction, offset, c);
offset += sizeof(c);

float f;
ALIGN_UP(offset, __alignof(f));
cuParamSetf(cuFunction, offset, f);
offset += sizeof(f);

CUdeviceptr dptr;
ALIGN_UP(offset, __alignof(dptr));
cuParamSetv(cuFunction, offset, &dptr, sizeof(dptr));
offset += sizeof(dptr);

float2 f2;
ALIGN_UP(offset, 8); // float2's alignment is 8
cuParamSetv(cuFunction, offset, &f2, sizeof(f2));
offset += sizeof(f2);

cuParamSetSize(cuFunction, offset);

cuFuncSetBlockShape(cuFunction, blockWidth, blockHeight, 1);
cuLaunchGrid(cuFunction, gridWidth, gridHeight);
The alignment requirement of a structure is equal to the maximum of the alignment
requirements of its fields. The alignment requirement of a structure that contains
built-in vector types, CUdeviceptr, or non-aligned double and long long,
might therefore differ between device code and host code. Such a structure might
also be padded differently. The following structure, for example, is not padded at all
in host code, but it is padded in device code with 12 bytes after field f since the
alignment requirement for field f4 is 16.
typedef struct {
float f;
float4 f4;
} myStruct;
Any parameter of type myStruct must therefore be passed using separate calls to
cuParam*(), such as:
myStruct s;
int offset = 0;

cuParamSetv(cuFunction, offset, &s.f, sizeof(s.f));
offset += sizeof(s.f);

ALIGN_UP(offset, 16); // float4's alignment is 16
cuParamSetv(cuFunction, offset, &s.f4, sizeof(s.f4));
offset += sizeof(s.f4);
3.3.4 Device Memory
Linear memory is allocated using cuMemAlloc() or cuMemAllocPitch() and
freed using cuMemFree().
Here is the host code of the sample from Section 3.2.1 written using the driver API:
// Host code
int main()
{
// Initialize
if (cuInit(0) != CUDA_SUCCESS)
exit (0);

// Get number of devices supporting CUDA
int deviceCount = 0;
cuDeviceGetCount(&deviceCount);
if (deviceCount == 0) {
printf("There is no device supporting CUDA.\n");
exit (0);
}

// Get handle for device 0
CUdevice cuDevice = 0;
cuDeviceGet(&cuDevice, 0);

// Create context
CUcontext cuContext;
cuCtxCreate(&cuContext, 0, cuDevice);

// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, “VecAdd.ptx”);

// Get function handle from module
CUfunction vecAdd;
cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

// Allocate vectors in device memory
size_t size = N * sizeof(float);
CUdeviceptr d_A;
cuMemAlloc(&d_A, size);
CUdeviceptr d_B;
cuMemAlloc(&d_B, size);
CUdeviceptr d_C;
cuMemAlloc(&d_C, size);

// Copy vectors from host memory to device memory
// h_A and h_B are input vectors stored in host memory
cuMemcpyHtoD(d_A, h_A, size);
cuMemcpyHtoD(d_B, h_B, size);

// Invoke kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(d_A));
cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
offset += sizeof(d_A);
ALIGN_UP(offset, __alignof(d_B));
cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
offset += sizeof(d_B);
ALIGN_UP(offset, __alignof(d_C));
cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
offset += sizeof(d_C);
cuParamSetSize(vecAdd, offset);
int threadsPerBlock = 256;
int blocksPerGrid =
(N + threadsPerBlock - 1) / threadsPerBlock;
cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
cuLaunchGrid(vecAdd, blocksPerGrid, 1);

// Copy result from device memory to host memory
// h_C contains the result in host memory
cuMemcpyDtoH(h_C, d_C, size);

// Free device memory
cuMemFree(d_A);
cuMemFree(d_B);
cuMemFree(d_C);
}
Linear memory can also be allocated through cuMemAllocPitch(). This function
is recommended for allocations of 2D arrays as it makes sure that the allocation is
appropriately padded to meet the alignment requirements described in
Section 5.3.2.1, therefore ensuring best performance when accessing the row
addresses or performing copies between 2D arrays and other regions of device
memory (using cuMemcpy2D()). The returned pitch (or stride) must be used to
access array elements. The following code sample allocates a width×height 2D
array of floating-point values and shows how to loop over the array elements in
device code:
// Host code (assuming cuModule has been loaded)
CUdeviceptr devPtr;
size_t pitch;
cuMemAllocPitch(&devPtr, &pitch,
width * sizeof(float), height, 4);
CUfunction myKernel;
cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
int offset = 0;
cuParamSetv(myKernel, offset, &devPtr, sizeof(devPtr));
offset += sizeof(devPtr);
ALIGN_UP(offset, __alignof(pitch)); // ALIGN_UP as defined above
cuParamSetv(myKernel, offset, &pitch, sizeof(pitch));
offset += sizeof(pitch);
cuParamSeti(myKernel, offset, width);
offset += sizeof(width);
cuParamSeti(myKernel, offset, height);
offset += sizeof(height);
cuParamSetSize(myKernel, offset);
cuFuncSetBlockShape(myKernel, 512, 1, 1);
cuLaunchGrid(myKernel, 100, 1);

// Device code
__global__ void MyKernel(float* devPtr, size_t pitch,
int width, int height)
{
for (int r = 0; r < height; ++r) {
float* row = (float*)((char*)devPtr + r * pitch);
for (int c = 0; c < width; ++c) {
float element = row[c];
}
}
}
The following code sample allocates a width×height CUDA array of one 32-bit
floating-point component:
CUDA_ARRAY_DESCRIPTOR desc;
desc.Format = CU_AD_FORMAT_FLOAT;
desc.NumChannels = 1;
desc.Width = width;
desc.Height = height;
CUarray cuArray;
cuArrayCreate(&cuArray, &desc);
The reference manual lists all the various functions used to copy memory between
linear memory allocated with cuMemAlloc(), linear memory allocated with
cuMemAllocPitch(), and CUDA arrays.
The following code sample copies the 2D array to the CUDA array allocated in the
previous code samples:
CUDA_MEMCPY2D copyParam;
memset(&copyParam, 0, sizeof(copyParam));
copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
copyParam.dstArray = cuArray;
copyParam.srcMemoryType = CU_MEMORYTYPE_DEVICE;
copyParam.srcDevice = devPtr;
copyParam.srcPitch = pitch;
copyParam.WidthInBytes = width * sizeof(float);
copyParam.Height = height;
cuMemcpy2D(&copyParam);
The following code sample illustrates various ways of accessing global variables via
the driver API:
CUdeviceptr devPtr;
size_t bytes;

__constant__ float constData[256];
float data[256];
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "constData");
cuMemcpyHtoD(devPtr, data, bytes);
cuMemcpyDtoH(data, devPtr, bytes);

__device__ float devData;
float value = 3.14f;
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devData");
cuMemcpyHtoD(devPtr, &value, sizeof(float));

__device__ float* devPointer;
CUdeviceptr ptr;
cuMemAlloc(&ptr, 256 * sizeof(float));
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devPointer");
cuMemcpyHtoD(devPtr, &ptr, sizeof(ptr));
3.3.5 Shared Memory
The following code sample is the driver version of the host code of the sample from
Section 3.2.2.
In this sample, shared memory is statically allocated within the kernel as opposed to
allocated at runtime through cuFuncSetSharedSize().
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
int width;
int height;
int stride;
float* elements;
} Matrix;

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
CUdeviceptr elements;
// Load A and B to device memory
Matrix d_A;
d_A.width = d_A.stride = A.width; d_A.height = A.height;
size_t size = A.width * A.height * sizeof(float);
cuMemAlloc(&elements, size);
cuMemcpyHtoD(elements, A.elements, size);
d_A.elements = (float*)elements;

Matrix d_B;
d_B.width = d_B.stride = B.width; d_B.height = B.height;
size = B.width * B.height * sizeof(float);
cuMemAlloc(&elements, size);
cuMemcpyHtoD(elements, B.elements, size);
d_B.elements = (float*)elements;

// Allocate C in device memory
Matrix d_C;
d_C.width = d_C.stride = C.width; d_C.height = C.height;
size = C.width * C.height * sizeof(float);
cuMemAlloc(&elements, size);
d_C.elements = (float*)elements;

// Invoke kernel (assuming cuModule has been loaded)
CUfunction matMulKernel;
cuModuleGetFunction(&matMulKernel, cuModule, "MatMulKernel");
int offset = 0;
cuParamSetv(matMulKernel, offset, &d_A, sizeof(d_A));
offset += sizeof(d_A);
cuParamSetv(matMulKernel, offset, &d_B, sizeof(d_B));
offset += sizeof(d_B);
cuParamSetv(matMulKernel, offset, &d_C, sizeof(d_C));
offset += sizeof(d_C);
cuParamSetSize(matMulKernel, offset);
cuFuncSetBlockShape(matMulKernel, BLOCK_SIZE, BLOCK_SIZE, 1);
cuLaunchGrid(matMulKernel,
B.width / BLOCK_SIZE, A.height / BLOCK_SIZE);

// Read C from device memory
cuMemcpyDtoH(C.elements, (CUdeviceptr)d_C.elements, size);

// Free device memory
cuMemFree((CUdeviceptr)d_A.elements);
cuMemFree((CUdeviceptr)d_B.elements);
cuMemFree((CUdeviceptr)d_C.elements);
}
3.3.6 Multiple Devices
cuDeviceGetCount() and cuDeviceGet() provide a way to enumerate the
devices present in the system and other functions (described in the reference
manual) to retrieve their properties:
int deviceCount;
cuDeviceGetCount(&deviceCount);
for (int device = 0; device < deviceCount; ++device) {
CUdevice cuDevice;
cuDeviceGet(&cuDevice, device);
int major, minor;
cuDeviceComputeCapability(&major, &minor, cuDevice);
}
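A variant of this loop that also prints each device's name using
cuDeviceGetName() might look as follows (the buffer size is arbitrary):
for (int device = 0; device < deviceCount; ++device) {
CUdevice cuDevice;
cuDeviceGet(&cuDevice, device);
char name[256];
cuDeviceGetName(name, 256, cuDevice);
int major, minor;
cuDeviceComputeCapability(&major, &minor, cuDevice);
printf("Device %d: %s (compute capability %d.%d)\n",
device, name, major, minor);
}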
3.3.7 Texture and Surface Memory
3.3.7.1 Texture Memory
Texture binding is done using cuTexRefSetAddress() for linear memory and
cuTexRefSetArray() for CUDA arrays.
If a module cuModule contains some texture reference texRef defined as
texture<float, 2, cudaReadModeElementType> texRef;
the following code sample retrieves texRef's handle:
CUtexref cuTexRef;
cuModuleGetTexRef(&cuTexRef, cuModule, "texRef");
The following code sample binds texRef to some linear memory pointed to by
devPtr:
CUDA_ARRAY_DESCRIPTOR desc; // describes the format of the memory
cuTexRefSetAddress2D(cuTexRef, &desc, devPtr, pitch);
The following code sample binds texRef to a CUDA array cuArray:
cuTexRefSetArray(cuTexRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);
The reference manual lists various functions used to set address mode, filter mode,
format, and other flags for some texture reference. The format specified when
binding a texture to a texture reference must match the parameters specified when
declaring the texture reference; otherwise, the results of texture fetches are
undefined.
The following code sample is the driver version of the host code of the sample from
Section 3.2.4.1.3.
// Host code
int main()
{
// Allocate CUDA array in device memory
CUarray cuArray;
CUDA_ARRAY_DESCRIPTOR desc;
desc.Format = CU_AD_FORMAT_FLOAT;
desc.NumChannels = 1;
desc.Width = width;
desc.Height = height;
cuArrayCreate(&cuArray, &desc);

// Copy to device memory some data located at address h_data
// in host memory
CUDA_MEMCPY2D copyParam;
memset(&copyParam, 0, sizeof(copyParam));
copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
copyParam.dstArray = cuArray;
copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
copyParam.srcHost = h_data;
copyParam.srcPitch = width * sizeof(float);
copyParam.WidthInBytes = copyParam.srcPitch;
copyParam.Height = height;
cuMemcpy2D(&copyParam);

// Set texture parameters
CUtexref texRef;
cuModuleGetTexRef(&texRef, cuModule, "texRef");
cuTexRefSetAddressMode(texRef, 0, CU_TR_ADDRESS_MODE_WRAP);
cuTexRefSetAddressMode(texRef, 1, CU_TR_ADDRESS_MODE_WRAP);
cuTexRefSetFilterMode(texRef, CU_TR_FILTER_MODE_LINEAR);
cuTexRefSetFlags(texRef, CU_TRSF_NORMALIZED_COORDINATES);
cuTexRefSetFormat(texRef, CU_AD_FORMAT_FLOAT, 1);

// Bind the array to the texture reference
cuTexRefSetArray(texRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);

// Allocate result of transformation in device memory
CUdeviceptr output;
cuMemAlloc(&output, width * height * sizeof(float));

// Invoke kernel (assuming cuModule has been loaded)
CUfunction transformKernel;
cuModuleGetFunction(&transformKernel,
cuModule, "transformKernel");
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(output));
cuParamSetv(transformKernel, offset, &output, sizeof(output));
offset += sizeof(output);
ALIGN_UP(offset, __alignof(width));
cuParamSeti(transformKernel, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(transformKernel, offset, height);
offset += sizeof(height);
ALIGN_UP(offset, __alignof(angle));
cuParamSetf(transformKernel, offset, angle);
offset += sizeof(angle);
cuParamSetSize(transformKernel, offset);
cuFuncSetBlockShape(transformKernel, 16, 16, 1);
cuLaunchGrid(transformKernel,
(width + 16 - 1) / 16,
(height + 16 - 1) / 16);

// Free device memory
cuArrayDestroy(cuArray);
cuMemFree(output);
}
3.3.7.2 Surface Memory
Surface binding is done using cuSurfRefSetArray() for CUDA arrays.
If a module cuModule contains some surface reference surfRef defined as
surface<void, 2> surfRef;
the following code sample retrieves surfRef's handle:
CUsurfref cuSurfRef;
cuModuleGetSurfRef(&cuSurfRef, cuModule, "surfRef");
The following code sample binds surfRef to a CUDA array cuArray:
cuSurfRefSetArray(cuSurfRef, cuArray, CU_SRSA_USE_ARRAY_FORMAT);
The following code sample is the driver version of the host code of the sample from
Section 3.2.4.1.4.
// Host code
int main()
{
// Allocate CUDA arrays in device memory
CUDA_ARRAY_DESCRIPTOR desc;
desc.Format = CU_AD_FORMAT_UNSIGNED_INT8;
desc.NumChannels = 4;
desc.Width = width;
desc.Height = height;
CUarray cuInputArray;
cuArrayCreate(&cuInputArray, &desc);
CUarray cuOutputArray;
cuArrayCreate(&cuOutputArray, &desc);

// Copy to device memory some data located at address h_data
// in host memory
CUDA_MEMCPY2D copyParam;
memset(&copyParam, 0, sizeof(copyParam));
copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
copyParam.dstArray = cuInputArray;
copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
copyParam.srcHost = h_data;
copyParam.srcPitch = width * sizeof(float);
copyParam.WidthInBytes = copyParam.srcPitch;
copyParam.Height = height;
cuMemcpy2D(&copyParam);

// Bind the arrays to the surface references
// (inputSurfRef and outputSurfRef are assumed to have been
// retrieved via cuModuleGetSurfRef() as shown above)
cuSurfRefSetArray(inputSurfRef, cuInputArray,
CU_SRSA_USE_ARRAY_FORMAT);
cuSurfRefSetArray(outputSurfRef, cuOutputArray,
CU_SRSA_USE_ARRAY_FORMAT);

// Invoke kernel (assuming cuModule has been loaded)
CUfunction copyKernel;
cuModuleGetFunction(&copyKernel,
cuModule, "copyKernel");
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(width));
cuParamSeti(copyKernel, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(copyKernel, offset, height);
offset += sizeof(height);
cuParamSetSize(copyKernel, offset);
cuFuncSetBlockShape(copyKernel, 16, 16, 1);
cuLaunchGrid(copyKernel,
(width + 16 - 1) / 16,
(height + 16 - 1) / 16);

// Free device memory
cuArrayDestroy(cuInputArray);
cuArrayDestroy(cuOutputArray);
}
3.3.8 Page-Locked Host Memory
Page-locked host memory can be allocated using cuMemHostAlloc() with
optional mutually non-exclusive flags:
 CU_MEMHOSTALLOC_PORTABLE to allocate memory that is portable across
CUDA contexts (see Section 3.2.5.1);
 CU_MEMHOSTALLOC_WRITECOMBINED to allocate memory as write-
combining (see Section 3.2.5.2);
 CU_MEMHOSTALLOC_DEVICEMAP to allocate mapped page-locked memory
(see Section 3.2.5.3).
Page-locked host memory is freed using cuMemFreeHost().
Page-locked memory mapping is enabled for a CUDA context by creating the
context with the CU_CTX_MAP_HOST flag and device pointers to mapped page-
locked memory are retrieved using cuMemHostGetDevicePointer().
Applications may query whether a device supports mapped page-locked host
memory or not by checking the
CU_DEVICE_ATTRIBUTE_CAN_MAP_HOST_MEMORY attribute using
cuDeviceGetAttribute().
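The following sketch combines these calls (size is a placeholder and the context
is assumed to have been created with the CU_CTX_MAP_HOST flag):
float* h_data;
cuMemHostAlloc((void**)&h_data, size,
CU_MEMHOSTALLOC_PORTABLE | CU_MEMHOSTALLOC_DEVICEMAP);
CUdeviceptr d_data;
cuMemHostGetDevicePointer(&d_data, h_data, 0);
... // kernels may access the memory through d_data
cuMemFreeHost(h_data);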
3.3.9 Asynchronous Concurrent Execution
Applications may query if a device can perform copies between page-locked host
memory and device memory concurrently with kernel execution by checking the
CU_DEVICE_ATTRIBUTE_GPU_OVERLAP attribute using
cuDeviceGetAttribute().
Applications may query if a device supports multiple kernels running concurrently
by checking the CU_DEVICE_ATTRIBUTE_CONCURRENT_KERNELS attribute using
cuDeviceGetAttribute().
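For example, both attributes can be queried as follows (cuDevice is assumed to
be a valid device handle):
int gpuOverlap, concurrentKernels;
cuDeviceGetAttribute(&gpuOverlap,
CU_DEVICE_ATTRIBUTE_GPU_OVERLAP, cuDevice);
cuDeviceGetAttribute(&concurrentKernels,
CU_DEVICE_ATTRIBUTE_CONCURRENT_KERNELS, cuDevice);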
3.3.9.1 Stream
The driver API provides functions similar to the runtime API to manage streams.
The following code sample is the driver version of the code sample from
Section 3.2.6.4.
CUstream stream[2];
for (int i = 0; i < 2; ++i)
cuStreamCreate(&stream[i], 0);
float* hostPtr;
cuMemAllocHost((void**)&hostPtr, 2 * size);

for (int i = 0; i < 2; ++i)
cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(outputDevPtr));
cuParamSetv(cuFunction, offset,
&outputDevPtr, sizeof(outputDevPtr));
offset += sizeof(outputDevPtr);
ALIGN_UP(offset, __alignof(inputDevPtr));
cuParamSetv(cuFunction, offset,
&inputDevPtr, sizeof(inputDevPtr));
offset += sizeof(inputDevPtr);
ALIGN_UP(offset, __alignof(size));
cuParamSeti(cuFunction, offset, size);
offset += sizeof(int);
cuParamSetSize(cuFunction, offset);
cuFuncSetBlockShape(cuFunction, 512, 1, 1);
cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
size, stream[i]);
cuCtxSynchronize();

for (int i = 0; i < 2; ++i)
cuStreamDestroy(stream[i]);
3.3.9.2 Event Management
The driver API provides functions similar to the runtime API to manage events.
The following code sample is the driver version of the code sample from
Section 3.2.6.6.
CUevent start, stop;
cuEventCreate(&start, CU_EVENT_DEFAULT);
cuEventCreate(&stop, CU_EVENT_DEFAULT);

cuEventRecord(start, 0);
for (int i = 0; i < 2; ++i)
cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(outputDevPtr));
cuParamSetv(cuFunction, offset,
&outputDevPtr, sizeof(outputDevPtr));
offset += sizeof(outputDevPtr);
ALIGN_UP(offset, __alignof(inputDevPtr));
cuParamSetv(cuFunction, offset,
&inputDevPtr, sizeof(inputDevPtr));
offset += sizeof(inputDevPtr);
ALIGN_UP(offset, __alignof(size));
cuParamSeti(cuFunction, offset, size);
offset += sizeof(size);
cuParamSetSize(cuFunction, offset);
cuFuncSetBlockShape(cuFunction, 512, 1, 1);
cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
size, stream[i]);
cuEventRecord(stop, 0);
cuEventSynchronize(stop);
float elapsedTime;
cuEventElapsedTime(&elapsedTime, start, stop);
They are destroyed this way:
cuEventDestroy(start);
cuEventDestroy(stop);
3.3.9.3 Synchronous Calls
Whether the host thread will yield, block, or spin on a synchronous function call can
be specified by calling cuCtxCreate() with some specific flags as described in the
reference manual.
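For example, a context that blocks the calling host thread on synchronous calls,
rather than spinning (CU_CTX_SCHED_SPIN) or yielding (CU_CTX_SCHED_YIELD),
might be created as follows:
CUcontext cuContext;
cuCtxCreate(&cuContext, CU_CTX_BLOCKING_SYNC, cuDevice);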
3.3.10 Graphics Interoperability
The driver API provides functions similar to the runtime API to manage graphics
interoperability.
A resource must be registered to CUDA before it can be mapped using the
functions mentioned in Sections 3.3.10.1 and 3.3.10.2. These functions return a
CUDA graphics resource of type CUgraphicsResource. Registering a resource is
potentially high-overhead and therefore typically called only once per resource. A
CUDA graphics resource is unregistered using
cuGraphicsUnregisterResource().
Once a resource is registered to CUDA, it can be mapped and unmapped as many
times as necessary using cuGraphicsMapResources() and
cuGraphicsUnmapResources(). cuGraphicsResourceSetMapFlags()
can be called to specify usage hints (write-only, read-only) that the CUDA driver can
use to optimize resource management.
A mapped resource can be read from or written to by kernels using the device
memory address returned by cuGraphicsResourceGetMappedPointer() for
buffers and cuGraphicsSubResourceGetMappedArray() for CUDA arrays.
Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA
produces undefined results.
Sections 3.3.10.1 and 3.3.10.2 give specifics for each graphics API and some code
samples.
3.3.10.1 OpenGL Interoperability
Interoperability with OpenGL requires that the CUDA context be specifically
created using cuGLCtxCreate() instead of cuCtxCreate().
The OpenGL resources that may be mapped into the address space of CUDA are
OpenGL buffer, texture, and renderbuffer objects. A buffer object is registered
using cuGraphicsGLRegisterBuffer(). A texture or renderbuffer object is
registered using cuGraphicsGLRegisterImage(). The same restrictions
described in Section 3.2.7.1 apply.
The following code sample is the driver version of the code sample from
Section 3.2.7.1.
CUfunction createVertices;
GLuint positionsVBO;
CUgraphicsResource positionsVBO_CUDA;

int main()
{
// Initialize driver API
...

// Get handle for device 0
CUdevice cuDevice = 0;
cuDeviceGet(&cuDevice, 0);

// Create context
CUcontext cuContext;
cuGLCtxCreate(&cuContext, 0, cuDevice);

// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, "createVertices.ptx");

// Get function handle from module
cuModuleGetFunction(&createVertices,
cuModule, "createVertices");

// Initialize OpenGL and GLUT
...
glutDisplayFunc(display);

// Create buffer object and register it with CUDA
glGenBuffers(1, &positionsVBO);
glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
unsigned int size = width * height * 4 * sizeof(float);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
cuGraphicsGLRegisterBuffer(&positionsVBO_CUDA,
positionsVBO,
CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

// Launch rendering loop
glutMainLoop();
}

void display()
{
// Map OpenGL buffer object for writing from CUDA
CUdeviceptr positions;
cuGraphicsMapResources(1, &positionsVBO_CUDA, 0);
size_t num_bytes;
cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
positionsVBO_CUDA);

// Execute kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(positions));
cuParamSetv(createVertices, offset,
&positions, sizeof(positions));
offset += sizeof(positions);
ALIGN_UP(offset, __alignof(time));
cuParamSetf(createVertices, offset, time);
offset += sizeof(time);
ALIGN_UP(offset, __alignof(width));
cuParamSeti(createVertices, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(createVertices, offset, height);
offset += sizeof(height);
cuParamSetSize(createVertices, offset);
int threadsPerBlock = 16;
cuFuncSetBlockShape(createVertices,
threadsPerBlock, threadsPerBlock, 1);
cuLaunchGrid(createVertices,
width / threadsPerBlock, height / threadsPerBlock);

// Unmap buffer object
cuGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

// Render from buffer object
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, width * height);
glDisableClientState(GL_VERTEX_ARRAY);

// Swap buffers
glutSwapBuffers();
glutPostRedisplay();
}

void deleteVBO()
{
cuGraphicsUnregisterResource(positionsVBO_CUDA);
glDeleteBuffers(1, &positionsVBO);
}
On Windows and for Quadro GPUs, cuWGLGetDevice() can be used to retrieve
the CUDA device associated with the handle returned by wglEnumGpusNV().
3.3.10.2 Direct3D Interoperability
Interoperability with Direct3D requires that the Direct3D device be specified when
the CUDA context is created. This is done by creating the CUDA context using
cuD3D9CtxCreate() or cuD3D9CtxCreateOnDevice() (resp.
cuD3D10CtxCreate() or cuD3D10CtxCreateOnDevice(), and
cuD3D11CtxCreate() or cuD3D11CtxCreateOnDevice()) instead of
cuCtxCreate().
Two sets of calls are also available to allow the creation of CUDA devices with
interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate
Frame Rendering) mode. These two new sets of calls are
cuD3D[9|10|11]CtxCreateOnDevice() and
cuD3D[9|10|11]GetDevices(). A call to
cuD3D[9|10|11]GetDevices() should be used to obtain a list of CUDA device
handles that can be passed as the last parameter to
cuD3D[9|10|11]CtxCreateOnDevice().
Applications that intend to support interoperability between Direct3D devices in
SLI configurations and CUDA should be written to only use these calls instead of
the cuD3D[9|10|11]CtxCreate() calls. In addition, they can call
cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context
active at a given time.
See Section 4.3 for general recommendations related to interoperability between
Direct3D devices using SLI and CUDA contexts.
The Direct3D resources that may be mapped into the address space of CUDA are
Direct3D buffers, textures, and surfaces. These resources are registered using
cuGraphicsD3D9RegisterResource(),
cuGraphicsD3D10RegisterResource(), and
cuGraphicsD3D11RegisterResource().
The following code sample is the driver version of the host code of the sample from
Section 3.2.7.2.
Direct3D 9 Version:
IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
// Initialize Direct3D
D3D = Direct3DCreate9(D3D_SDK_VERSION);

// Get a CUDA-enabled adapter
unsigned int adapter = 0;
for (; adapter < D3D->GetAdapterCount(); adapter++) {
D3DADAPTER_IDENTIFIER9 adapterId;
D3D->GetAdapterIdentifier(adapter, 0, &adapterId);
int dev;
if (cuD3D9GetDevice(&dev, adapterId.DeviceName)
== CUDA_SUCCESS)
break;
}

// Create device
...
D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
D3DCREATE_HARDWARE_VERTEXPROCESSING,
&params, &device);

// Initialize driver API
...

// Create context
CUdevice cuDevice;
CUcontext cuContext;
cuD3D9CtxCreate(&cuContext, &cuDevice, 0, device);

// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, "createVertices.ptx");

// Get function handle from module
cuModuleGetFunction(&createVertices,
cuModule, "createVertices");

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
D3DPOOL_DEFAULT, &positionsVB, 0);
cuGraphicsD3D9RegisterResource(&positionsVB_CUDA,
positionsVB,
CU_GRAPHICS_REGISTER_FLAGS_NONE);
cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
CUdeviceptr positions;
cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
positionsVB_CUDA);

// Execute kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(positions));
cuParamSetv(createVertices, offset,
&positions, sizeof(positions));
offset += sizeof(positions);
ALIGN_UP(offset, __alignof(time));
cuParamSetf(createVertices, offset, time);
offset += sizeof(time);
ALIGN_UP(offset, __alignof(width));
cuParamSeti(createVertices, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(createVertices, offset, height);
offset += sizeof(height);
cuParamSetSize(createVertices, offset);
int threadsPerBlock = 16;
cuFuncSetBlockShape(createVertices,
threadsPerBlock, threadsPerBlock, 1);
cuLaunchGrid(createVertices,
width / threadsPerBlock, height / threadsPerBlock);

// Unmap vertex buffer
cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cuGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
Direct3D 10 Version:
ID3D10Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D10Buffer* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter)))
break;
int dev;
if (cuD3D10GetDevice(&dev, adapter) == CUDA_SUCCESS)
break;
adapter->Release();
adapter = 0; // keep looking for a CUDA-enabled adapter
}
factory->Release();

// Create swap chain and device
...
D3D10CreateDeviceAndSwapChain(adapter,
D3D10_DRIVER_TYPE_HARDWARE, 0,
D3D10_CREATE_DEVICE_DEBUG,
D3D10_SDK_VERSION,
&swapChainDesc, &swapChain,
&device);
adapter->Release();

// Initialize driver API
...

// Create context
CUdevice cuDevice;
CUcontext cuContext;
cuD3D10CtxCreate(&cuContext, &cuDevice, 0, device);

// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, "createVertices.ptx");

// Get function handle from module
cuModuleGetFunction(&createVertices,
cuModule, "createVertices");

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D10_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D10_USAGE_DEFAULT;
bufferDesc.ByteWidth = size;
bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cuGraphicsD3D10RegisterResource(&positionsVB_CUDA,
positionsVB,
CU_GRAPHICS_REGISTER_FLAGS_NONE);
cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
CUdeviceptr positions;
cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
positionsVB_CUDA);

// Execute kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(positions));
cuParamSetv(createVertices, offset,
&positions, sizeof(positions));
offset += sizeof(positions);
ALIGN_UP(offset, __alignof(time));
cuParamSetf(createVertices, offset, time);
offset += sizeof(time);
ALIGN_UP(offset, __alignof(width));
cuParamSeti(createVertices, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(createVertices, offset, height);
offset += sizeof(height);
cuParamSetSize(createVertices, offset);
int threadsPerBlock = 16;
cuFuncSetBlockShape(createVertices,
threadsPerBlock, threadsPerBlock, 1);
cuLaunchGrid(createVertices,
width / threadsPerBlock, height / threadsPerBlock);

// Unmap vertex buffer
cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cuGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
Direct3D 11 Version:
ID3D11Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D11Buffer* positionsVB;
CUgraphicsResource positionsVB_CUDA;

int main()
{
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter)))
break;
int dev;
if (cuD3D11GetDevice(&dev, adapter) == CUDA_SUCCESS)
break;
adapter->Release();
adapter = 0; // keep looking for a CUDA-enabled adapter
}
factory->Release();

// Create swap chain and device
...
sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
D3D11_DRIVER_TYPE_HARDWARE,
0,
D3D11_CREATE_DEVICE_DEBUG,
featureLevels, 3,
D3D11_SDK_VERSION,
&swapChainDesc, &swapChain,
&device,
&featureLevel,
&deviceContext);
adapter->Release();

// Initialize driver API
...

// Create context
CUdevice cuDevice;
CUcontext cuContext;
cuD3D11CtxCreate(&cuContext, &cuDevice, 0, device);

// Create module from binary file
CUmodule cuModule;
cuModuleLoad(&cuModule, "createVertices.ptx");

// Get function handle from module
cuModuleGetFunction(&createVertices,
cuModule, "createVertices");

// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.ByteWidth = size;
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cuGraphicsD3D11RegisterResource(&positionsVB_CUDA,
positionsVB,
CU_GRAPHICS_REGISTER_FLAGS_NONE);
cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);

// Launch rendering loop
while (...) {
...
Render();
...
}
}

void Render()
{
// Map vertex buffer for writing from CUDA
CUdeviceptr positions;
cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
positionsVB_CUDA);

// Execute kernel
#define ALIGN_UP(offset, alignment) \
(offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;
ALIGN_UP(offset, __alignof(positions));
cuParamSetv(createVertices, offset,
&positions, sizeof(positions));
offset += sizeof(positions);
ALIGN_UP(offset, __alignof(time));
cuParamSetf(createVertices, offset, time);
offset += sizeof(time);
ALIGN_UP(offset, __alignof(width));
cuParamSeti(createVertices, offset, width);
offset += sizeof(width);
ALIGN_UP(offset, __alignof(height));
cuParamSeti(createVertices, offset, height);
offset += sizeof(height);
cuParamSetSize(createVertices, offset);
int threadsPerBlock = 16;
cuFuncSetBlockShape(createVertices,
threadsPerBlock, threadsPerBlock, 1);
cuLaunchGrid(createVertices,
width / threadsPerBlock, height / threadsPerBlock);

// Unmap vertex buffer
cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

// Draw and present
...
}

void releaseVB()
{
cuGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
3.3.11 Error Handling
All driver functions return an error code, but for an asynchronous function (see
Section 3.2.6), this error code cannot possibly report any of the asynchronous errors
that could occur on the device since the function returns before the device has
completed the task; the error code only reports errors that occur on the host prior
to executing the task, typically related to parameter validation. If an asynchronous
error occurs, it will be reported by some subsequent unrelated driver function call.
The only way to check for asynchronous errors just after some asynchronous
function call is therefore to synchronize just after the call by calling
cuCtxSynchronize() (or by using any other synchronization mechanisms
described in Section 3.3.9) and checking the error code returned by
cuCtxSynchronize().
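For example (gridWidth, gridHeight, and stream are placeholders; note that the
driver API reports errors as CUresult values):
cuLaunchGridAsync(cuFunction, gridWidth, gridHeight, stream);
CUresult err = cuCtxSynchronize(); // waits for completion
if (err != CUDA_SUCCESS)
printf("Asynchronous error %d detected\n", (int)err);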
3.3.12 Call Stack
On devices of compute capability 2.x, the size of the call stack can be queried using
cuCtxGetLimit() and set using cuCtxSetLimit().
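As a minimal sketch (the doubling of the stack size is an arbitrary example):
size_t stackSize;
cuCtxGetLimit(&stackSize, CU_LIMIT_STACK_SIZE);
cuCtxSetLimit(CU_LIMIT_STACK_SIZE, 2 * stackSize);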
3.4 Interoperability between Runtime and Driver
APIs
An application can mix runtime API code with driver API code.
If a context is created and made current via the driver API, subsequent runtime calls
will pick up this context instead of creating a new one.
If the runtime is initialized (implicitly as mentioned in Section 3.2),
cuCtxAttach() can be used to retrieve the context created during initialization.
This context can be used by subsequent driver API calls.
Device memory can be allocated and freed using either API. CUdeviceptr can be
cast to regular pointers and vice-versa:
CUdeviceptr devPtr;
float* d_data;

// Allocation using driver API
cuMemAlloc(&devPtr, size);
d_data = (float*)devPtr;

// Allocation using runtime API
cudaMalloc(&d_data, size);
devPtr = (CUdeviceptr)d_data;
In particular, this means that applications written using the driver API can invoke
libraries written using the runtime API (such as CUFFT, CUBLAS, …).
All functions from the device and version management sections of the reference
manual can be used interchangeably.
3.5 Versioning and Compatibility
There are two version numbers that developers should care about when developing
a CUDA application: The compute capability that describes the general
specifications and features of the compute device (see Section 2.5) and the version
of the CUDA driver API that describes the features supported by the driver API
and runtime.
The version of the driver API is defined in the driver header file as
CUDA_VERSION. It allows developers to check whether their application requires a
newer driver than the one currently installed. This is important, because the driver
API is backward compatible, meaning that applications, plug-ins, and libraries
(including the C runtime) compiled against a particular version of the driver API will
continue to work on subsequent driver releases as illustrated in Figure 3-4. The
driver API is not forward compatible, which means that applications, plug-ins, and
libraries (including the C runtime) compiled against a particular version of the driver
API will not work on previous versions of the driver.
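The installed driver's version can be obtained at run time with
cuDriverGetVersion() and compared against the version the application was
compiled against, for example:
int driverVersion;
cuDriverGetVersion(&driverVersion);
if (driverVersion < CUDA_VERSION)
printf("A newer CUDA driver is required "
"(have %d, need %d)\n", driverVersion, CUDA_VERSION);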
It is important to note that mixing and matching versions is not supported;
specifically:
 All applications, plug-ins, and libraries on a system must use the same version of
the CUDA driver API, since only one version of the CUDA driver can be
installed on a system.
 All plug-ins and libraries used by an application must use the same version of
the runtime.
 All plug-ins and libraries used by an application must use the same version of
any libraries that use the runtime (such as CUFFT, CUBLAS, …).

[Figure: 1.0, 1.1, and 2.0 drivers, each with their apps, libs & plug-ins,
marking which combinations are compatible and which are incompatible.]
Figure 3-4. The Driver API is Backward, but Not Forward
Compatible
3.6 Compute Modes
On Tesla solutions running Linux, one can set any device in a system in one of the
three following modes using NVIDIA's System Management Interface (nvidia-smi),
which is a tool distributed as part of the Linux driver:
 Default compute mode: Multiple host threads can use the device (by calling
cudaSetDevice() on this device, when using the runtime API, or by making
current a context associated with the device, when using the driver API) at the
same time.
 Exclusive compute mode: Only one host thread can use the device at any given
time.
 Prohibited compute mode: No host thread can use the device.
This means, in particular, that a host thread using the runtime API without explicitly
calling cudaSetDevice() might be associated with a device other than device 0 if
device 0 turns out to be in prohibited compute mode or in exclusive compute mode
and used by another host thread. cudaSetValidDevices() can be used to set a
device from a prioritized list of devices.
Applications may query the compute mode of a device by calling
cudaGetDeviceProperties() and checking the computeMode property or
checking the CU_DEVICE_COMPUTE_MODE attribute using
cuDeviceGetAttribute().
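As a minimal sketch using the runtime API (the helper name deviceIsUsable is
ours):
#include <cuda_runtime.h>

int deviceIsUsable(int device)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    // cudaComputeModeProhibited means no host thread can use the device.
    return prop.computeMode != cudaComputeModeProhibited;
}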
3.7 Mode Switches
GPUs dedicate some DRAM memory to the so-called primary surface, which is used
to refresh the display device whose output is viewed by the user. When users initiate
a mode switch of the display by changing the resolution or bit depth of the display
(using NVIDIA control panel or the Display control panel on Windows), the
amount of memory needed for the primary surface changes. For example, if the user
changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the
system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-
screen graphics applications running with anti-aliasing enabled may require much
more display memory for the primary surface.) On Windows, other events that may
initiate display mode switches include launching a full-screen DirectX application,
hitting Alt+Tab to task switch away from a full-screen DirectX application, or
hitting Ctrl+Alt+Del to lock the computer.
If a mode switch increases the amount of memory needed for the primary surface,
the system may have to cannibalize memory allocations dedicated to CUDA
applications. Therefore, a mode switch causes any call to the CUDA runtime to
fail and return an invalid context error.






Chapter 4.
Hardware Implementation
The CUDA architecture is built around a scalable array of multithreaded Streaming
Multiprocessors (SMs). When a CUDA program on the host CPU invokes a kernel
grid, the blocks of the grid are enumerated and distributed to multiprocessors with
available execution capacity. The threads of a thread block execute concurrently on
one multiprocessor, and multiple thread blocks can execute concurrently on one
multiprocessor. As thread blocks terminate, new blocks are launched on the vacated
multiprocessors.
A multiprocessor is designed to execute hundreds of threads concurrently. To
manage such a large number of threads, it employs a unique architecture called
SIMT (Single-Instruction, Multiple-Thread) that is described in Section 4.1. To maximize
utilization of its functional units, it leverages thread-level parallelism by using
hardware multithreading as detailed in Section 4.2, more so than instruction-level
parallelism within a single thread (instructions are pipelined, but unlike CPU cores
they are executed in order and there is no branch prediction and no speculative
execution).
Sections 4.1 and 4.2 describe the architecture features of the streaming
multiprocessor that are common to all devices. Sections G.3.1 and G.4.1 provide the
specifics for devices of compute capabilities 1.x and 2.x, respectively.
4.1 SIMT Architecture
The multiprocessor creates, manages, schedules, and executes threads in groups of
32 parallel threads called warps. Individual threads composing a warp start together
at the same program address, but they have their own instruction address counter
and register state and are therefore free to branch and execute independently. The
term warp originates from weaving, the first parallel thread technology. A half-warp is
either the first or second half of a warp. A quarter-warp is either the first, second,
third, or fourth quarter of a warp.
When a multiprocessor is given one or more thread blocks to execute, it partitions
them into warps that get scheduled by a warp scheduler for execution. The way a block
is partitioned into warps is always the same; each warp contains threads of
consecutive, increasing thread IDs with the first warp containing thread 0.
Section 2.2 describes how thread IDs relate to thread indices in the block.
A warp executes one common instruction at a time, so full efficiency is realized
when all 32 threads of a warp agree on their execution path. If threads of a warp
diverge via a data-dependent conditional branch, the warp serially executes each
branch path taken, disabling threads that are not on that path, and when all paths
complete, the threads converge back to the same execution path. Branch divergence
occurs only within a warp; different warps execute independently regardless of
whether they are executing common or disjoint code paths.
The SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector
organizations in that a single instruction controls multiple processing elements. A
key difference is that SIMD vector organizations expose the SIMD width to the
software, whereas SIMT instructions specify the execution and branching behavior
of a single thread. In contrast with SIMD vector machines, SIMT enables
programmers to write thread-level parallel code for independent, scalar threads, as
well as data-parallel code for coordinated threads. For the purposes of correctness,
the programmer can essentially ignore the SIMT behavior; however, substantial
performance improvements can be realized by taking care that the code seldom
requires threads in a warp to diverge. In practice, this is analogous to the role of
cache lines in traditional code: Cache line size can be safely ignored when designing
for correctness but must be considered in the code structure when designing for
peak performance. Vector architectures, on the other hand, require the software to
coalesce loads into vectors and manage divergence manually.
If a non-atomic instruction executed by a warp writes to the same location in global
or shared memory for more than one of the threads of the warp, the number of
serialized writes that occur to that location varies depending on the compute
capability of the device (see Sections G.3.2, G.3.3, G.4.2, and G.4.3) and which
thread performs the final write is undefined.
If an atomic instruction (see Section B.11) executed by a warp reads, modifies, and
writes to the same location in global memory for more than one of the threads of
the warp, each read, modify, write to that location occurs and they are all serialized,
but the order in which they occur is undefined.
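As a minimal sketch, the following kernel has every thread of every warp
atomically increment the same counter; the read-modify-write sequences are
serialized, but each increment takes effect exactly once:
__global__ void countThreads(unsigned int* counter)
{
    atomicAdd(counter, 1);
}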
4.2 Hardware Multithreading
The execution context (program counters, registers, etc) for each warp processed by
a multiprocessor is maintained on-chip during the entire lifetime of the warp.
Switching from one execution context to another therefore has no cost, and at every
instruction issue time, a warp scheduler selects a warp that has threads ready to
execute its next instruction (active threads) and issues the instruction to those threads.
In particular, each multiprocessor has a set of 32-bit registers that are partitioned
among the warps, and a parallel data cache or shared memory that is partitioned among
the thread blocks.
The number of blocks and warps that can reside and be processed together on the
multiprocessor for a given kernel depends on the amount of registers and shared
memory used by the kernel and the amount of registers and shared memory
available on the multiprocessor. There are also a maximum number of resident
blocks and a maximum number of resident warps per multiprocessor. These limits
as well as the amount of registers and shared memory available on the multiprocessor
are a function of the compute capability of the device and are given in Appendix G.
If there are not enough registers or shared memory available per multiprocessor to
process at least one block, the kernel will fail to launch.
The total number of warps Wblock in a block is as follows:
Wblock = ceil(T / Wsize, 1)
 T is the number of threads per block,
 Wsize is the warp size, which is equal to 32,
 ceil(x, y) is equal to x rounded up to the nearest multiple of y.
The total number of registers Rblock allocated for a block is as follows:
For devices of compute capability 1.x:
Rblock = ceil(ceil(Wblock, GW) * Wsize * Rk, GT)
For devices of compute capability 2.x:
Rblock = ceil(Rk * Wsize, GT) * Wblock
 GW is the warp allocation granularity, equal to 2 (compute capability 1.x only),
 Rk is the number of registers used by the kernel,
 GT is the thread allocation granularity, equal to 256 for devices of compute
capability 1.0 and 1.1, and 512 for devices of compute capability 1.2 and 1.3,
and 64 for devices of compute capability 2.x.
The total amount of shared memory Sblock in bytes allocated for a block is as follows:
Sblock = ceil(Sk, GS)
 Sk is the amount of shared memory used by the kernel in bytes,
 GS is the shared memory allocation granularity, which is equal to 512 for devices
of compute capability 1.x and 128 for devices of compute capability 2.x.
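The following sketch expresses these formulas in C for devices of compute
capability 2.x (the helper names ceilTo, warpsPerBlock, registersPerBlock, and
sharedMemPerBlock are ours, not CUDA API functions):
// Round x up to the nearest multiple of y, i.e. ceil(x, y) above.
static size_t ceilTo(size_t x, size_t y)
{
    return ((x + y - 1) / y) * y;
}

// Wblock = ceil(T / Wsize, 1) with Wsize = 32
static size_t warpsPerBlock(size_t T)
{
    return (T + 31) / 32;
}

// Rblock = ceil(Rk * Wsize, GT) * Wblock with GT = 64 for 2.x
static size_t registersPerBlock(size_t Rk, size_t T)
{
    return ceilTo(Rk * 32, 64) * warpsPerBlock(T);
}

// Sblock = ceil(Sk, GS) with GS = 128 for 2.x
static size_t sharedMemPerBlock(size_t Sk)
{
    return ceilTo(Sk, 128);
}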
4.3 Multiple Devices
In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the
CUDA driver and runtime as separate devices. There are however special
considerations as described below when the system is in SLI mode.
First, an allocation in one CUDA device on one GPU will consume memory on
other GPUs that are part of the SLI configuration of the Direct3D device. Because
of this, allocations may fail earlier than otherwise expected.
Second, applications have to create multiple CUDA contexts, one for each GPU in
the SLI configuration and deal with the fact that a different GPU is used for
rendering by the Direct3D device at every frame. The application can use the
cuD3D[9|10|11]GetDevices() set of calls to identify the CUDA device
handle(s) for the GPU(s) that are performing the rendering in the current and next
frame. Given this information the application will typically map Direct3D resources
to the CUDA context corresponding to the CUDA device returned by
cuD3D[9|10|11]GetDevices() when the deviceList parameter is set to
CU_D3D10_DEVICE_LIST_CURRENT_FRAME. See Sections 3.2.7.2 and 3.3.10.2
for details on how to use CUDA-Direct3D interoperability.







Chapter 5.
Performance Guidelines
5.1 Overall Performance Optimization Strategies
Performance optimization revolves around three basic strategies:
 Maximize parallel execution to achieve maximum utilization;
 Optimize memory usage to achieve maximum memory throughput;
 Optimize instruction usage to achieve maximum instruction throughput.
Which strategies will yield the best performance gain for a particular portion of an
application depends on the performance limiters for that portion; optimizing
instruction usage of a kernel that is mostly limited by memory accesses will not yield
any significant performance gain, for example. Optimization efforts should
therefore be constantly directed by measuring and monitoring the performance
limiters, for example using the CUDA profiler. Also, comparing the floating-point
operation throughput or memory throughput – whichever makes more sense – of a
particular kernel to the corresponding peak theoretical throughput of the device
indicates how much room for improvement there is for the kernel.
5.2 Maximize Utilization
To maximize utilization the application should be structured in a way that it exposes
as much parallelism as possible and efficiently maps this parallelism to the various
components of the system to keep them busy most of the time.
5.2.1 Application Level
At a high level, the application should maximize parallel execution between the host,
the devices, and the bus connecting the host to the devices, by using asynchronous
function calls and streams as described in Section 3.2.6. It should assign to each
processor the type of work it does best: serial workloads to the host; parallel
workloads to the devices.
For the parallel workloads, at points in the algorithm where parallelism is broken
because some threads need to synchronize in order to share data with each other,
there are two cases: Either these threads belong to the same block, in which case
they should use __syncthreads() and share data through shared memory within
the same kernel invocation, or they belong to different blocks, in which case they
must share data through global memory using two separate kernel invocations, one
for writing to and one for reading from global memory. The second case is much
less optimal since it adds the overhead of extra kernel invocations and global
memory traffic. Its occurrence should therefore be minimized by mapping the
algorithm to the CUDA programming model in such a way that the computations
that require inter-thread communication are performed within a single thread block
as much as possible.
5.2.2 Device Level
At a lower level, the application should maximize parallel execution between the
multiprocessors of a device.
For devices of compute capability 1.x, only one kernel can execute on a device at
one time, so the kernel should be launched with at least as many thread blocks as
there are multiprocessors in the device.
For devices of compute capability 2.x, multiple kernels can execute concurrently on
a device, so maximum utilization can also be achieved by using streams to enable
enough kernels to execute concurrently as described in Section 3.2.6.
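As a minimal sketch (kernelA, kernelB, dataA, and dataB are hypothetical),
kernels launched into different streams may run concurrently on such devices:
cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
kernelA<<<64, 256, 0, stream[0]>>>(dataA);
kernelB<<<64, 256, 0, stream[1]>>>(dataB);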
5.2.3 Multiprocessor Level
At an even lower level, the application should maximize parallel execution between
the various functional units within a multiprocessor.
As described in Section 4.2, a GPU multiprocessor relies on thread-level parallelism
to maximize utilization of its functional units. Utilization is therefore directly linked
to the number of resident warps. At every instruction issue time, a warp scheduler
selects a warp that is ready to execute its next instruction, if any, and issues the
instruction to the active threads of the warp. The number of clock cycles it takes for
a warp to be ready to execute its next instruction is called the latency, and full
utilization is achieved when all warp schedulers always have some instruction to
issue for some warp at every clock cycle during that latency period, or in other
words, when latency is completely “hidden”. The number of instructions required to
hide a latency of L clock cycles depends on the respective throughputs of these
instructions (see Section 5.4.1 for the throughputs of various arithmetic
instructions); assuming maximum throughput for all instructions, it is:
 L/4 (rounded up to nearest integer) for devices of compute capability 1.x since
a multiprocessor issues one instruction per warp over 4 clock cycles, as
mentioned in Section G.3.1,
 L (rounded up to nearest integer) for devices of compute capability 2.0 since a
multiprocessor issues one instruction per warp over 2 clock cycles for 2 warps
at a time, as mentioned in Section G.4.1,
 2L (rounded up to nearest integer) for devices of compute capability 2.1 since a
multiprocessor issues a pair of instructions per warp over 2 clock cycles for 2
warps at a time, as mentioned in Section G.4.1.
For devices of compute capability 2.0, the two instructions issued every other cycle
are for two different warps. For devices of compute capability 2.1, the four
instructions issued every other cycle are two pairs for two different warps, each pair
being for the same warp.
The most common reason a warp is not ready to execute its next instruction is that
the instruction's input operands are not yet available.
If all input operands are registers, latency is caused by register dependencies, i.e.
some of the input operands are written by some previous instruction(s) whose
execution has not completed yet. In the case of a back-to-back register dependency
(i.e. some input operand is written by the previous instruction), the latency is equal
to the execution time of the previous instruction and the warp scheduler must
schedule instructions for different warps during that time. Execution time varies
depending on the instruction, but it is typically about 22 clock cycles, which
translates to 6 warps for devices of compute capability 1.x and 22 warps for devices
of compute capability 2.x.
If some input operand resides in off-chip memory, the latency is much higher: 400
to 800 clock cycles. The number of warps required to keep the warp schedulers busy
during such high latency periods depends on the kernel code; in general, more warps
are required if the ratio of the number of instructions with no off-chip memory
operands (i.e. arithmetic instructions most of the time) to the number of
instructions with off-chip memory operands is low (this ratio is commonly called
the arithmetic intensity of the program). If this ratio is 15, for example, then to hide
latencies of about 600 clock cycles, about 10 warps are required for devices of
compute capability 1.x and about 40 for devices of compute capability 2.x.
Another reason a warp is not ready to execute its next instruction is that it is waiting
at some memory fence (Section B.5) or synchronization point (Section B.6). A
synchronization point can force the multiprocessor to idle as more and more warps
wait for other warps in the same block to complete execution of instructions prior
to the synchronization point. Having multiple resident blocks per multiprocessor
can help reduce idling in this case, as warps from different blocks do not need to
wait for each other at synchronization points.
The number of blocks and warps residing on each multiprocessor for a given kernel
call depends on the execution configuration of the call (Section B.16), the memory
resources of the multiprocessor, and the resource requirements of the kernel as
described in Section 4.2. To assist programmers in choosing thread block size based
on register and shared memory requirements, the CUDA Software Development
Kit provides a spreadsheet, called the CUDA Occupancy Calculator, where
occupancy is defined as the ratio of the number of resident warps to the maximum
number of resident warps (given in Appendix G for various compute capabilities).
Register, local, shared, and constant memory usages are reported by the compiler
when compiling with the --ptxas-options=-v option.
The total amount of shared memory required for a block is equal to the sum of the
amount of statically allocated shared memory, the amount of dynamically allocated
shared memory, and for devices of compute capability 1.x, the amount of shared
memory used to pass the kernel's arguments (see Section B.1.4).
The number of registers used by a kernel can have a significant impact on the
number of resident warps. For example, for devices of compute capability 1.2, if a
kernel uses 16 registers and each block has 512 threads and requires very little
shared memory, then two blocks (i.e. 32 warps) can reside on the multiprocessor
since they require 2x512x16 registers, which exactly matches the number of registers
available on the multiprocessor. But as soon as the kernel uses one more register,
only one block (i.e. 16 warps) can be resident since two blocks would require
2x512x17 registers, which are more registers than are available on the
multiprocessor. Therefore, the compiler attempts to minimize register usage while
keeping register spilling (see Section 5.3.2.2) and the number of instructions to a
minimum. Register usage can be controlled using the -maxrregcount compiler
option or launch bounds as described in Section B.17.
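As a minimal sketch, the following hypothetical kernel declares via launch bounds
that it is launched with at most 256 threads per block and that at least two
blocks should be resident per multiprocessor, which bounds the number of
registers the compiler may use per thread:
__global__ void __launch_bounds__(256, 2) MyKernel(float* data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += 1.0f;
}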
Each double variable (on devices that support native double precision, i.e. devices
of compute capability 1.2 and higher) and each long long variable uses two
registers. However, devices of compute capability 1.2 and higher have at least twice
as many registers per multiprocessor as devices with lower compute capability.
The effect of execution configuration on performance for a given kernel call
generally depends on the kernel code. Experimentation is therefore recommended.
Applications can also parameterize execution configurations based on register file
size and shared memory size, which depends on the compute capability of the
device, as well as on the number of multiprocessors and memory bandwidth of the
device, all of which can be queried using the runtime or driver API (see reference
manual).
The number of threads per block should be chosen as a multiple of the warp size to
avoid wasting computing resources with under-populated warps as much as
possible.
5.3 Maximize Memory Throughput
The first step in maximizing overall memory throughput for the application is to
minimize data transfers with low bandwidth.
That means minimizing data transfers between the host and the device, as detailed
in Section 5.3.1, since these have much lower bandwidth than data transfers
between global memory and the device.
That also means minimizing data transfers between global memory and the device
by maximizing use of on-chip memory: shared memory and caches (i.e. L1/L2
caches available on devices of compute capability 2.x, texture cache and constant
cache available on all devices).
Shared memory is equivalent to a user-managed cache: The application explicitly
allocates and accesses it. As illustrated in Section 3.2.2, a typical programming
pattern is to stage data coming from device memory into shared memory; in other
words, to have each thread of a block:
 Load data from device memory to shared memory,
 Synchronize with all the other threads of the block so that each thread can
safely read shared memory locations that were populated by different threads,
 Process the data in shared memory,
 Synchronize again if necessary to make sure that shared memory has been
updated with the results,
 Write the results back to device memory.
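A minimal sketch of this staging pattern, assuming blocks of 256 threads and an
input large enough for every thread to load one element:
__global__ void stage(const float* in, float* out)
{
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[i];           // load from device memory
    __syncthreads();                     // make all loads visible block-wide
    // Process: read an element loaded by another thread of the block.
    float v = tile[blockDim.x - 1 - threadIdx.x];
    out[i] = 2.0f * v;                   // write the results back
}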
For some applications (e.g. for which global memory accesses are data-dependent), a
traditional hardware-managed cache is more appropriate to exploit data locality. As
mentioned in Section G.4.1, for devices of compute capability 2.x, the same on-chip
memory is used for both L1 and shared memory, and how much of it is dedicated to
L1 versus shared memory is configurable for each kernel call.
The throughput of memory accesses by a kernel can vary by an order of magnitude
depending on access pattern for each type of memory. The next step in maximizing
memory throughput is therefore to organize memory accesses as optimally as
possible based on the optimal memory access patterns described in Sections 5.3.2.1,
5.3.2.3, 5.3.2.4, and 5.3.2.5. This optimization is especially important for global
memory accesses as global memory bandwidth is low, so non-optimal global
memory accesses have a higher impact on performance.
5.3.1 Data Transfer between Host and Device
Applications should strive to minimize data transfer between the host and the
device. One way to accomplish this is to move more code from the host to the
device, even if that means running kernels that expose only low parallelism.
Intermediate data structures may be created in device memory, operated on by the
device, and destroyed without ever being mapped by the host or copied to host
memory.
Also, because of the overhead associated with each transfer, batching many small
transfers into a single large transfer always performs better than making each
transfer separately.
On systems with a front-side bus, higher performance for data transfers between
host and device is achieved by using page-locked host memory as described in
Section 3.2.4.1.4.
In addition, when using mapped page-locked memory (Section 3.2.5.3), there is no
need to allocate any device memory and explicitly copy data between device and
host memory. Data transfers are implicitly performed each time the kernel accesses
the mapped memory. For maximum performance, these memory accesses must be
coalesced as with accesses to global memory (see Section 5.3.2.1). Assuming that
they are and that the mapped memory is read or written only once, using mapped
page-locked memory instead of explicit copies between device and host memory can
be a win for performance.
On integrated systems where device memory and host memory are physically the
same, any copy between host and device memory is superfluous and mapped page-
locked memory should be used instead. Applications may query whether a device is
integrated or not by calling cudaGetDeviceProperties() and checking the
integrated property or checking the CU_DEVICE_ATTRIBUTE_INTEGRATED
attribute using cuDeviceGetAttribute().
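As a minimal sketch (the helper name preferMappedMemory is ours):
#include <cuda_runtime.h>

int preferMappedMemory(int device)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    // On integrated devices that can map host memory, zero-copy
    // mapped memory avoids superfluous host-device copies.
    return prop.integrated && prop.canMapHostMemory;
}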
5.3.2 Device Memory Accesses
An instruction that accesses addressable memory (i.e. global, local, shared, constant,
or texture memory) might need to be re-issued multiple times depending on the
distribution of the memory addresses across the threads within the warp. How the
distribution affects the instruction throughput this way is specific to each type of
memory and described in the following sections. For example, for global memory,
as a general rule, the more scattered the addresses are, the more reduced the
throughput is.
5.3.2.1 Global Memory
Global memory resides in device memory and device memory is accessed via 32-,
64-, or 128-byte memory transactions. These memory transactions must be naturally
aligned: Only the 32-, 64-, or 128-byte segments of device memory that are aligned
to their size (i.e. whose first address is a multiple of their size) can be read or written
by memory transactions.
When a warp executes an instruction that accesses global memory, it coalesces the
memory accesses of the threads within the warp into one or more of these memory
transactions depending on the size of the word accessed by each thread and the
distribution of the memory addresses across the threads. In general, the more
transactions are necessary, the more unused words are transferred in addition to the
words accessed by the threads, reducing the instruction throughput accordingly. For
example, if a 32-byte memory transaction is generated for each thread's 4-byte
access, throughput is divided by 8.
How many transactions are necessary and how throughput is ultimately affected
varies with the compute capability of the device. For devices of compute capability
1.0 and 1.1, the requirements on the distribution of the addresses across the threads
to get any coalescing at all are very strict. They are much more relaxed for devices of
higher compute capabilities. For devices of compute capability 2.x, the memory
transactions are cached, so data locality is exploited to reduce impact on throughput.
Sections G.3.2 and G.4.2 give more details on how global memory accesses are
handled for various compute capabilities.
To maximize global memory throughput, it is therefore important to maximize
coalescing by:
 Following the most optimal access patterns based on Sections G.3.2 and G.4.2,
 Using data types that meet the size and alignment requirement detailed in
Section 5.3.2.1.1,
 Padding data in some cases, for example, when accessing a two-dimensional
array as described in Section 5.3.2.1.2.
5.3.2.1.1 Size and Alignment Requirement
Global memory instructions support reading or writing words of size equal to 1, 2,
4, 8, or 16 bytes. Any access (via a variable or a pointer) to data residing in global
memory compiles to a single global memory instruction if and only if the size of the
data type is 1, 2, 4, 8, or 16 bytes and the data is naturally aligned (i.e. its address is a
multiple of that size).
If this size and alignment requirement is not fulfilled, the access compiles to
multiple instructions with interleaved access patterns that prevent these instructions
from fully coalescing. It is therefore recommended to use types that meet this
requirement for data that resides in global memory.
The alignment requirement is automatically fulfilled for the built-in types of
Section B.3.1 like float2 or float4.
For structures, the size and alignment requirements can be enforced by the compiler
using the alignment specifiers __align__(8) or __align__(16), such as
struct __align__(8) {
float x;
float y;
};
or
struct __align__(16) {
float x;
float y;
float z;
};
Any address of a variable residing in global memory or returned by one of the
memory allocation routines from the driver or runtime API is always aligned to at
least 256 bytes.
Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results
(off by a few words), so special care must be taken to maintain alignment of the
starting address of any value or array of values of these types. A typical case where
this might be easily overlooked is when using some custom global memory
allocation scheme, whereby the allocation of multiple arrays (with multiple calls to
cudaMalloc() or cuMemAlloc()) is replaced by the allocation of a single large
block of memory partitioned into multiple arrays, in which case the starting address
of each array is offset from the block's starting address.
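A minimal sketch of keeping such offsets aligned (alignUp is our helper, not a
CUDA API function):
// Round offset up to the next multiple of alignment (a power of 2).
size_t alignUp(size_t offset, size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}
// With char* base pointing to one large block from cudaMalloc() (which
// is at least 256-byte aligned), a float4 array placed after aBytes of
// floats must start at a 16-byte boundary:
// float*  a = (float*)base;
// float4* b = (float4*)(base + alignUp(aBytes, 16));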
5.3.2.1.2 Two-Dimensional Arrays
A common global memory access pattern is when each thread of index (tx,ty)
uses the following address to access one element of a 2D array of width width,
located at address BaseAddress of type type* (where type meets the
requirement described in Section 5.3.2.1.1):
BaseAddress + width * ty + tx
For these accesses to be fully coalesced, both the width of the thread block and the
width of the array must be a multiple of the warp size (or only half the warp size for
devices of compute capability 1.x).
In particular, this means that an array whose width is not a multiple of this size will
be accessed much more efficiently if it is actually allocated with a width rounded up
to the closest multiple of this size and its rows padded accordingly. The
cudaMallocPitch() and cuMemAllocPitch() functions and associated
memory copy functions described in the reference manual enable programmers to
write non-hardware-dependent code to allocate arrays that conform to these
constraints.
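As a minimal sketch (width and height are assumed to be defined),
cudaMallocPitch() returns the padded row size in bytes, which the kernel then
uses to compute row addresses:
float* devPtr;
size_t pitch;
cudaMallocPitch((void**)&devPtr, &pitch, width * sizeof(float), height);
// In the kernel, each row starts at a properly aligned address:
// float* row = (float*)((char*)devPtr + ty * pitch);
// float element = row[tx];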
5.3.2.2 Local Memory
Local memory accesses only occur for some automatic variables as mentioned in
Section B.2.4. Automatic variables that the compiler is likely to place in local
memory are:
 Arrays for which it cannot determine that they are indexed with constant
quantities,
 Large structures or arrays that would consume too much register space,
 Any variable if the kernel uses more registers than available (this is also known
as register spilling).
Inspection of the PTX assembly code (obtained by compiling with the -ptx or
-keep option) will tell if a variable has been placed in local memory during the first
compilation phases as it will be declared using the .local mnemonic and accessed
using the ld.local and st.local mnemonics. Even if it has not, subsequent
compilation phases might still decide otherwise though if they find it consumes too
much register space for the targeted architecture: Inspection of the cubin object
using cuobjdump will tell if this is the case. Also, the compiler reports total local
memory usage per kernel (lmem) when compiling with the --ptxas-options=-v
option. Note that some mathematical functions have implementation paths that
might access local memory.
The local memory space resides in device memory, so local memory accesses have
same high latency and low bandwidth as global memory accesses and are subject to
the same requirements for memory coalescing as described in Section 5.3.2.1. Local
memory is however organized such that consecutive 32-bit words are accessed by
consecutive thread IDs. Accesses are therefore fully coalesced as long as all threads
in a warp access the same relative address (e.g. same index in an array variable, same
member in a structure variable).
On devices of compute capability 2.x, local memory accesses are always cached in
L1 and L2 in the same way as global memory accesses (see Section G.4.2).
5.3.2.3 Shared Memory
Because it is on-chip, the shared memory space is much faster than the local and
global memory spaces. In fact, for all threads of a warp, accessing shared memory is
fast as long as there are no bank conflicts between the threads, as detailed below.
To achieve high bandwidth, shared memory is divided into equally-sized memory
modules, called banks, which can be accessed simultaneously. Any memory read or
write request made of n addresses that fall in n distinct memory banks can therefore
be serviced simultaneously, yielding an overall bandwidth that is n times as high as
the bandwidth of a single module.
However, if two addresses of a memory request fall in the same memory bank, there
is a bank conflict and the access has to be serialized. The hardware splits a memory
request with bank conflicts into as many separate conflict-free requests as necessary,
decreasing throughput by a factor equal to the number of separate memory requests.
If the number of separate memory requests is n, the initial memory request is said to
cause n-way bank conflicts.
To get maximum performance, it is therefore important to understand how memory
addresses map to memory banks in order to schedule the memory requests so as to
minimize bank conflicts. This is described in Sections G.3.3 and G.4.3 for devices of
compute capability 1.x and 2.x, respectively.
5.3.2.4 Constant Memory
The constant memory space resides in device memory and is cached in the constant
cache mentioned in Sections G.3.1 and G.4.1.
For devices of compute capability 1.x, a constant memory request for a warp is first
split into two requests, one for each half-warp, that are issued independently.
A request is then split into as many separate requests as there are different memory
addresses in the initial request, decreasing throughput by a factor equal to the
number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in
case of a cache hit, or at the throughput of device memory otherwise.
5.3.2.5 Texture and Surface Memory
The texture and surface memory spaces reside in device memory and are cached in
texture cache, so a texture fetch or surface read costs one memory read from device
memory only on a cache miss, otherwise it just costs one read from texture cache.
The texture cache is optimized for 2D spatial locality, so threads of the same warp
that read texture or surface addresses that are close together in 2D will achieve best
performance. Also, it is designed for streaming fetches with a constant latency; a
cache hit reduces DRAM bandwidth demand but not fetch latency.
Reading device memory through texture or surface fetching presents some benefits
that can make it an advantageous alternative to reading device memory from global
or constant memory:
 If the memory reads do not follow the access patterns that global or constant
memory reads must respect to get good performance (see Sections 5.3.2.1 and
5.3.2.4), higher bandwidth can be achieved provided that there is locality in the
texture fetches or surface reads (this is less likely for devices of compute
capability 2.x given that global memory reads are cached on these devices);
 Addressing calculations are performed outside the kernel by dedicated units;
 Packed data may be broadcast to separate variables in a single operation;
 8-bit and 16-bit integer input data may be optionally converted to 32-bit
floating-point values in the range [0.0, 1.0] or [-1.0, 1.0] (see Section 3.2.4.1.1).
5.4 Maximize Instruction Throughput
To maximize instruction throughput the application should:
 Minimize the use of arithmetic instructions with low throughput; this includes
trading precision for speed when it does not affect the end result, such as using
intrinsic instead of regular functions (intrinsic functions are listed in
Section C.2), single-precision instead of double-precision, or flushing
denormalized numbers to zero;
 Minimize divergent warps caused by control flow instructions as detailed in
Section 5.4.2;
 Reduce the number of instructions, for example, by optimizing out
synchronization points whenever possible as described in Section 5.4.3 or by
using restricted pointers as described in Section E.3.
In this section, throughputs are given in number of operations per clock cycle per
multiprocessor. For a warp size of 32, one instruction results in 32 operations.
Therefore, if T is the number of operations per clock cycle, the instruction
throughput is one instruction every 32/T clock cycles.
All throughputs are for one multiprocessor. They must be multiplied by the number
of multiprocessors in the device to get throughput for the whole device.
5.4.1 Arithmetic Instructions
Table 5-1 gives the throughputs of the arithmetic instructions that are natively
supported in hardware for devices of various compute capabilities.
Table 5-1. Throughput of Native Arithmetic Instructions
(Operations per Clock Cycle per Multiprocessor; values are for
compute capability 1.x / 2.0 / 2.1)
 32-bit floating-point add, multiply, multiply-add: 8 / 32 / 48
 64-bit floating-point add, multiply, multiply-add: 1 / 16 / 4
 32-bit integer add, logical operation: 8 / 32 / 48
 32-bit integer shift, compare: 8 / 16 / 16
 32-bit integer multiply, multiply-add, sum of absolute difference:
multiple instructions / 16 / 16
 24-bit integer multiply (__[u]mul24): 8 / multiple instructions /
multiple instructions
 32-bit floating-point reciprocal, reciprocal square root, base-2
logarithm (__log2f), base-2 exponential (exp2f), sine (__sinf),
cosine (__cosf): 2 / 4 / 8
 Type conversions: 8 / 16 / 16

Other instructions and functions are implemented on top of the native instructions.
The implementation may be different for devices of compute capability 1.x and
devices of compute capability 2.x, and the number of native instructions after
compilation may fluctuate with every compiler version. For complicated functions,
there can be multiple code paths depending on input. cuobjdump can be used to
inspect a particular implementation in a cubin object.
The implementations of some functions are readily available in the CUDA header
files (math_functions.h, device_functions.h, …).
In general, code compiled with -ftz=true (denormalized numbers are flushed to
zero) tends to have higher performance than code compiled with -ftz=false.
Similarly, code compiled with -prec-div=false (less precise division) tends to
have higher performance than code compiled with -prec-div=true, and
code compiled with -prec-sqrt=false (less precise square root) tends to have
higher performance than code compiled with -prec-sqrt=true. The nvcc user
manual describes these compilation flags in more details.
Single-Precision Floating-Point Addition and Multiplication Intrinsics
__fadd_r[d,u], __fmul_r[d,u], and __fmaf_r[n,z,d,u] (see
Section C.2.1) compile to tens of instructions for devices of compute capability 1.x,
but map to a single native instruction for devices of compute capability 2.x.
Single-Precision Floating-Point Division
__fdividef(x, y) (see Section C.2.1) provides faster single-precision floating-
point division than the division operator.
Single-Precision Floating-Point Reciprocal Square Root
To preserve IEEE-754 semantics the compiler can optimize 1.0/sqrtf() into
rsqrtf() only when both reciprocal and square root are approximate (i.e. with
-prec-div=false and -prec-sqrt=false). It is therefore recommended to
invoke rsqrtf() directly where desired.
Single-Precision Floating-Point Square Root
Single-precision floating-point square root is implemented as a reciprocal square
root followed by a reciprocal instead of a reciprocal square root followed by a
multiplication so that it gives correct results for 0 and infinity. Therefore, its
throughput is 1 operation per clock cycle for devices of compute capability 1.x and
2 operations per clock cycle for devices of compute capability 2.x.
Sine and Cosine
sinf(x), cosf(x), tanf(x), sincosf(x), and corresponding double-
precision instructions are much more expensive and even more so if the argument x
is large in magnitude.
More precisely, the argument reduction code (see math_functions.h for
implementation) comprises two code paths referred to as the fast path and the slow
path, respectively.
The fast path is used for arguments sufficiently small in magnitude and essentially
consists of a few multiply-add operations. The slow path is used for arguments large
in magnitude and consists of lengthy computations required to achieve correct
results over the entire argument range.
At present, the argument reduction code for the trigonometric functions selects the
fast path for arguments whose magnitude is less than 48039.0f for the single-
precision functions, and less than 2147483648.0 for the double-precision functions.
As the slow path requires more registers than the fast path, an attempt has been
made to reduce register pressure in the slow path by storing some intermediate
variables in local memory, which may affect performance because of local memory
high latency and bandwidth (see Section 5.3.2.2). At present, 28 bytes of local
memory are used by single-precision functions, and 44 bytes are used by double-
precision functions. However, the exact amount is subject to change.
Due to the lengthy computations and use of local memory in the slow path, the
throughput of these trigonometric functions is lower by one order of magnitude
when the slow path reduction is required as opposed to the fast path reduction.
Integer Arithmetic
On devices of compute capability 1.x, 32-bit integer multiplication is implemented
using multiple instructions as it is not natively supported. 24-bit integer
multiplication is natively supported however via the __[u]mul24 intrinsic (see
Section C.2.3). Using __[u]mul24 instead of the 32-bit multiplication operator
whenever possible usually improves performance for instruction bound kernels. It
can have the opposite effect however in cases where the use of __[u]mul24
inhibits compiler optimizations.
On devices of compute capability 2.x, 32-bit integer multiplication is natively
supported, but 24-bit integer multiplication is not. __[u]mul24 is therefore
implemented using multiple instructions and should not be used.
Integer division and modulo operation are costly: tens of instructions on devices of
compute capability 1.x, below 20 instructions on devices of compute capability 2.x.
They can be replaced with bitwise operations in some cases: If n is a power of 2,
(i/n) is equivalent to (i>>log2(n)) and (i%n) is equivalent to (i&(n-1));
the compiler will perform these conversions if n is literal.
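As a minimal sketch of the manual version of these conversions:
unsigned int i = threadIdx.x;
unsigned int n = 16;                 // a power of 2
unsigned int q = i >> 4;             // i / n, since log2(16) == 4
unsigned int r = i & (n - 1);        // i % n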
__brev, __brevll, __popc, and __popcll (see Section C.2.3) compile to tens
of instructions for devices of compute capability 1.x, but __brev and __popc map
to a single instruction for devices of compute capability 2.x and __brevll and
__popcll to just a few.
__clz, __clzll, __ffs, and __ffsll (see Section C.2.3) compile to fewer
instructions for devices of compute capability 2.x than for devices of compute
capability 1.x.
Type Conversion
Sometimes, the compiler must insert conversion instructions, introducing additional
execution cycles. This is the case for:
 Functions operating on variables of type char or short whose operands
generally need to be converted to int,
 Double-precision floating-point constants (i.e. those constants defined without
any type suffix) used as input to single-precision floating-point computations (as
mandated by C/C++ standards).
This last case can be avoided by using single-precision floating-point constants,
defined with an f suffix such as 3.141592653589793f, 1.0f, 0.5f.
5.4.2 Control Flow Instructions
Any flow control instruction (if, switch, do, for, while) can significantly
impact the effective instruction throughput by causing threads of the same warp to
diverge (i.e. to follow different execution paths). If this happens, the different
executions paths have to be serialized, increasing the total number of instructions
executed for this warp. When all the different execution paths have completed, the
threads converge back to the same execution path.
To obtain best performance in cases where the control flow depends on the thread
ID, the controlling condition should be written so as to minimize the number of
divergent warps. This is possible because the distribution of the warps across the
block is deterministic as mentioned in Section 4.1. A trivial example is when the
controlling condition only depends on (threadIdx / warpSize) where
warpSize is the warp size. In this case, no warp diverges since the controlling
condition is perfectly aligned with the warps.
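A minimal sketch of such a warp-aligned condition:
__global__ void warpAligned(float* data)
{
    int i = threadIdx.x;
    // (i / warpSize) is uniform within each warp, so no warp diverges.
    if ((i / warpSize) % 2 == 0)
        data[i] *= 2.0f;
    else
        data[i] += 1.0f;
}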
Sometimes, the compiler may unroll loops or it may optimize out if or switch
statements by using branch predication instead, as detailed below. In these cases, no
warp can ever diverge. The programmer can also control loop unrolling using the
#pragma unroll directive (see Section E.2).
When using branch predication none of the instructions whose execution depends
on the controlling condition gets skipped. Instead, each of them is associated with a
per-thread condition code or predicate that is set to true or false based on the
controlling condition and although each of these instructions gets scheduled for
execution, only the instructions with a true predicate are actually executed.
Instructions with a false predicate do not write results, and also do not evaluate
addresses or read operands.
The compiler replaces a branch instruction with predicated instructions only if the
number of instructions controlled by the branch condition is less or equal to a
certain threshold: If the compiler determines that the condition is likely to produce
many divergent warps, this threshold is 7, otherwise it is 4.
5.4.3 Synchronization Instruction
Throughput for __syncthreads() is 8 operations per clock cycle for devices of
compute capability 1.x and 16 operations per clock cycle for devices of compute
capability 2.x.
Note that __syncthreads() can impact performance by forcing the
multiprocessor to idle as detailed in Section 5.2.3.
Because a warp executes one common instruction at a time, threads within a warp
are implicitly synchronized and this can sometimes be used to omit
__syncthreads() for better performance.
In the following code sample, for example, both calls to __syncthreads() are
required to get the expected result (i.e. result[i] = 2 * myArray[i] for
i > 0). Without synchronization, any of the two references to myArray[tid]
could return either 2 or the value initially stored in myArray, depending on whether
the memory read occurs before or after the memory write from
myArray[tid + 1] = 2.
// myArray is an array of integers located in global or shared
// memory
__global__ void MyKernel(int* result) {
int tid = threadIdx.x;
...
int ref1 = myArray[tid];
__syncthreads();
myArray[tid + 1] = 2;
__syncthreads();
int ref2 = myArray[tid];
result[tid] = ref1 * ref2;
...
}
However, in the following slightly modified code sample, threads are guaranteed to
belong to the same warp, so that there is no need for any __syncthreads().
// myArray is an array of integers located in global or shared
// memory
__global__ void MyKernel(int* result) {
int tid = threadIdx.x;
...
if (tid < warpSize) {
int ref1 = myArray[tid];
myArray[tid + 1] = 2;
int ref2 = myArray[tid];
result[tid] = ref1 * ref2;
}
...
}
Simply removing the __syncthreads() is not enough however; myArray must
also be declared as volatile as described in Section B.2.5.







Appendix A.
CUDA-Enabled GPUs
Table A-1 lists all CUDA-enabled devices with their compute capability, number of
multiprocessors, and number of CUDA cores.
These, as well as the clock frequency and the total amount of device memory, can
be queried using the runtime or driver API (see reference manual).
Table A-1. CUDA-Enabled Devices with Compute Capability,
Number of Multiprocessors, and Number of CUDA
Cores
(each entry lists compute capability, number of multiprocessors, and
number of CUDA cores)
GeForce GTX 460: 2.1, 7, 336
GeForce GTX 470M: 2.1, 6, 288
GeForce GTS 450, GTX 460M: 2.1, 4, 192
GeForce GT 445M: 2.1, 3, 144
GeForce GT 435M, GT 425M, GT 420M: 2.1, 2, 96
GeForce GT 415M: 2.1, 1, 48
GeForce GTX 580: 2.0, 16, 512
GeForce GTX 480: 2.0, 15, 480
GeForce GTX 470: 2.0, 14, 448
GeForce GTX 465, GTX 480M: 2.0, 11, 352
GeForce GTX 295: 1.3, 2x30, 2x240
GeForce GTX 285, GTX 280, GTX 275: 1.3, 30, 240
GeForce GTX 260: 1.3, 24, 192
GeForce 9800 GX2: 1.1, 2x16, 2x128
GeForce GTS 250, GTS 150, 9800 GTX, 9800 GTX+, 8800 GTS 512, GTX 285M,
GTX 280M: 1.1, 16, 128
GeForce 8800 Ultra, 8800 GTX: 1.0, 16, 128
GeForce 9800 GT, 8800 GT, GTX 260M, 9800M GTX: 1.1, 14, 112
GeForce GT 240, GTS 360M, GTS 350M: 1.2, 12, 96
GeForce GT 130, 9600 GSO, 8800 GS, 8800M GTX, GTS 260M, GTS 250M,
9800M GT: 1.1, 12, 96
GeForce 8800 GTS: 1.0, 12, 96
GeForce GT 335M: 1.2, 9, 72
GeForce 9600 GT, 8800M GTS, 9800M GTS: 1.1, 8, 64
GeForce GT 220, GT 330M, GT 325M, GT 240M: 1.2, 6, 48
GeForce 9700M GT, GT 230M: 1.1, 6, 48
GeForce GT 120, 9500 GT, 8600 GTS, 8600 GT, 9700M GT, 9650M GS,
9600M GT, 9600M GS, 9500M GS, 8700M GT, 8600M GT, 8600M GS: 1.1, 4, 32
GeForce 210, 310M, 305M: 1.2, 2, 16
GeForce G100, 8500 GT, 8400 GS, 8400M GT, 9500M G, 9300M G, 8400M GS,
9400 mGPU, 9300 mGPU, 8300 mGPU, 8200 mGPU, 8100 mGPU, G210M,
G110M: 1.1, 2, 16
GeForce 9300M GS, 9200M GS, 9100M G, 8400M G, G105M: 1.1, 1, 8
Tesla C2050: 2.0, 14, 448
Tesla S1070: 1.3, 4x30, 4x240
Tesla C1060: 1.3, 30, 240
Tesla S870: 1.0, 4x16, 4x128
Tesla D870: 1.0, 2x16, 2x128
Tesla C870: 1.0, 16, 128
Quadro 2000: 2.1, 4, 192
Quadro 600: 2.1, 2, 96
Quadro 6000: 2.0, 14, 448
Quadro 5000: 2.0, 11, 352
Quadro 5000M: 2.0, 10, 320
Quadro 4000: 2.0, 8, 256
Quadro Plex 2200 D2: 1.3, 2x30, 2x240
Quadro Plex 2100 D4: 1.1, 4x14, 4x112
Quadro Plex 2100 Model S4: 1.0, 4x16, 4x128
Quadro Plex 1000 Model IV: 1.0, 2x16, 2x128
Quadro FX 5800: 1.3, 30, 240
Quadro FX 4800: 1.3, 24, 192
Quadro FX 4700 X2: 1.1, 2x14, 2x112
Quadro FX 3700M, FX 3800M: 1.1, 16, 128
Quadro FX 5600: 1.0, 16, 128
Quadro FX 3700: 1.1, 14, 112
Quadro FX 2800M: 1.1, 12, 96
Quadro FX 4600: 1.0, 12, 96
Quadro FX 1800M: 1.2, 9, 72
Quadro FX 3600M: 1.1, 8, 64
Quadro FX 880M, NVS 5100M: 1.2, 6, 48
Quadro FX 2700M: 1.1, 6, 48
Quadro FX 1700, FX 570, NVS 320M, FX 1700M, FX 1600M, FX 770M,
FX 570M: 1.1, 4, 32
Quadro FX 380 LP, FX 380M, NVS 3100M, NVS 2100M: 1.2, 2, 16
Quadro FX 370, NVS 290, NVS 160M, NVS 150M, NVS 140M, NVS 135M,
FX 360M: 1.1, 2, 16
Quadro FX 370M, NVS 130M: 1.1, 1, 8







Appendix B.
C Language Extensions
B.1 Function Type Qualifiers
Function type qualifiers specify whether a function executes on the host or on the
device and whether it is callable from the host or from the device.
B.1.1 __device__
The __device__ qualifier declares a function that is:
 Executed on the device
 Callable from the device only.
In device code compiled for devices of compute capability 1.x, a __device__
function is always inlined by default. The __noinline__ function qualifier
however can be used as a hint for the compiler not to inline the function if possible
(see Section E.1).
B.1.2 __global__
The __global__ qualifier declares a function as being a kernel. Such a function is:
 Executed on the device,
 Callable from the host only.
__global__ functions must have void return type.
Any call to a __global__ function must specify its execution configuration as
described in Section B.16.
A call to a __global__ function is asynchronous, meaning it returns before the
device has completed its execution.
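As a minimal sketch, a kernel declaration and its launch; the host call returns
immediately while the device executes:
__global__ void scale(float* data, float s)
{
    data[threadIdx.x] *= s;
}
// Host code, assuming d_data points to at least 256 floats in device
// memory:
// scale<<<1, 256>>>(d_data, 2.0f);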
B.1.3 __host__
The __host__ qualifier declares a function that is:
 Executed on the host,
 Callable from the host only.
Declaring a function with only the __host__ qualifier is equivalent to declaring it
without any of the __host__, __device__, or __global__ qualifiers; in either
case the function is compiled for the host only.
The __global__ and __host__ qualifiers cannot be used together.
The __device__ and __host__ qualifiers can be used together however, in
which case the function is compiled for both the host and the device. The
__CUDA_ARCH__ macro introduced in Section 3.1.4 can be used to differentiate
code paths between host and device:
__host__ __device__ void func()
{
#if __CUDA_ARCH__ == 100
// Device code path for compute capability 1.0
#elif __CUDA_ARCH__ == 200
// Device code path for compute capability 2.0
#elif !defined(__CUDA_ARCH__)
// Host code path
#endif
}
B.1.4 Restrictions
B.1.4.1 Functions Parameters
__global__ function parameters are passed to the device:
 via shared memory and are limited to 256 bytes on devices of compute
capability 1.x,
 via constant memory and are limited to 4 KB on devices of compute
capability 2.x.
B.1.4.2 Variadic Functions
__device__ and __global__ functions cannot have a variable number of
arguments.
B.1.4.3 Static Variables
__device__ and __global__ functions cannot declare static variables inside
their body.
B.1.4.4 Function Pointers
Function pointers to __global__ functions are supported, but function pointers
to __device__ functions are only supported in device code compiled for devices
of compute capability 2.x.
It is not allowed to take the address of a __device__ function in host code.
B.1.4.5 Recursion
__global__ functions do not support recursion.
__device__ functions only support recursion in device code compiled for devices
of compute capability 2.x.
B.2 Variable Type Qualifiers
Variable type qualifiers specify the memory location on the device of a variable.
B.2.1 __device__
The __device__ qualifier declares a variable that resides on the device.
At most one of the other type qualifiers defined in the next three sections may be
used together with __device__ to further specify which memory space the
variable belongs to. If none of them is present, the variable:
 Resides in global memory space,
 Has the lifetime of an application,
 Is accessible from all the threads within the grid and from the host through the
runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() /
cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime
API and cuModuleGetGlobal() for the driver API).
B.2.2 __constant__
The __constant__ qualifier, optionally used together with __device__,
declares a variable that:
 Resides in constant memory space,
 Has the lifetime of an application,
 Is accessible from all the threads within the grid and from the host through the
runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() /
cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime
API and cuModuleGetGlobal() for the driver API).
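As an illustration, the following sketch (variable names are hypothetical) declares a
__device__ and a __constant__ variable at file scope and initializes them from
the host through the runtime API:
__device__ float devData;
__constant__ float constData[256];

// Host code
float value = 3.14f;
float data[256] = { 0 };
cudaMemcpyToSymbol(devData, &value, sizeof(value));  // host to global memory
cudaMemcpyToSymbol(constData, data, sizeof(data));   // host to constant memory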
B.2.3 __shared__
The __shared__ qualifier, optionally used together with __device__, declares a
variable that:
 Resides in the shared memory space of a thread block,
 Has the lifetime of the block,
 Is only accessible from the threads within the block.
When declaring a variable in shared memory as an external array such as
extern __shared__ float shared[];
the size of the array is determined at launch time (see Section B.16). All variables
declared in this fashion start at the same address in memory, so that the layout of
the variables in the array must be explicitly managed through offsets. For example, if
one wants the equivalent of
short array0[128];
float array1[64];
int array2[256];
in dynamically allocated shared memory, one could declare and initialize the arrays
the following way:
extern __shared__ float array[];
__device__ void func()  // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[128];
    int*   array2 = (int*)&array1[64];
}
Note that pointers need to be aligned to the type they point to, so the following
code, for example, does not work since array1 is not aligned to 4 bytes.
extern __shared__ float array[];
__device__ void func()  // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[127];
}
Alignment requirements for the built-in vector types are listed in Table B-1.
B.2.4 Restrictions
The __device__, __shared__ and __constant__ qualifiers are not allowed
on struct and union members, on formal parameters and on local variables
within a function that executes on the host.
B.2.4.1 Storage and Scope
__shared__ and __constant__ variables have implied static storage.
__device__, __shared__ and __constant__ variables cannot be defined as
external using the extern keyword. The only exception is for dynamically allocated
__shared__ variables as described in Section B.2.3.
__device__ and __constant__ variables are only allowed at file scope.
B.2.4.2 Assignment
__constant__ variables cannot be assigned to from the device, only from the
host through host runtime functions (Sections 3.2.1 and 3.3.4).
__shared__ variables cannot have an initialization as part of their declaration.
B.2.4.3 Automatic Variable
An automatic variable declared in device code without any of the __device__,
__shared__ and __constant__ qualifiers generally resides in a register.
However in some cases the compiler might choose to place it in local memory,
which can have adverse performance consequences as detailed in Section 5.3.2.2.
B.2.4.4 Pointers
For devices of compute capability 1.x, pointers in code that is executed on the
device are supported as long as the compiler is able to resolve whether they point to
the shared memory space or to the global memory space; otherwise, they are
restricted to only point to memory allocated or declared in the global memory space.
For devices of compute capability 2.x, pointers are supported without any
restriction.
Dereferencing a pointer either to global or shared memory in code that is executed
on the host, or to host memory in code that is executed on the device, results in
undefined behavior, most often a segmentation fault and application termination.
The address obtained by taking the address of a __device__, __shared__ or
__constant__ variable can only be used in device code. The address of a
__device__ or __constant__ variable obtained through
cudaGetSymbolAddress() as described in Section 3.3.4 can only be used in
host code.
B.2.5 volatile
Only after the execution of a __threadfence_block(), __threadfence(),
or __syncthreads() (Sections B.5 and B.6) are prior writes to global or shared
memory guaranteed to be visible by other threads. As long as this requirement is
met, the compiler is free to optimize reads and writes to global or shared memory.
For example, in the code sample below, the first reference to myArray[tid]
compiles into a global or shared memory read instruction, but the second reference
does not as the compiler simply reuses the result of the first read.
// myArray is an array of non-zero integers
// located in global or shared memory
__global__ void MyKernel(int* result) {
    int tid = threadIdx.x;
    int ref1 = myArray[tid] * 1;
    myArray[tid + 1] = 2;
    int ref2 = myArray[tid] * 1;
    result[tid] = ref1 * ref2;
}
Therefore, ref2 cannot possibly be equal to 2 in thread tid as a result of thread
tid-1 overwriting myArray[tid] with 2.
This behavior can be changed using the volatile keyword: If a variable located in
global or shared memory is declared as volatile, the compiler assumes that its value
can be changed at any time by another thread and therefore any reference to this
variable compiles to an actual memory read instruction.
Note that even if myArray is declared as volatile in the code sample above, there is
no guarantee, in general, that ref2 will be equal to 2 in thread tid since thread
tid might read myArray[tid] into ref2 before thread tid-1 overwrites its
value with 2. Synchronization is required as mentioned in Section 5.4.3.
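For illustration, declaring the array volatile in the sample above would look as
follows (a sketch; the fixed size is illustrative):
// Declared volatile: the compiler assumes the contents can change
// at any time, so every reference compiles to an actual memory
// access instruction.
__shared__ volatile int myArray[128];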
B.3 Built-in Vector Types
B.3.1 char1, uchar1, char2, uchar2, char3, uchar3,
char4, uchar4, short1, ushort1, short2, ushort2,
short3, ushort3, short4, ushort4, int1, uint1, int2,
uint2, int3, uint3, int4, uint4, long1, ulong1,
long2, ulong2, long3, ulong3, long4, ulong4,
longlong1, ulonglong1, longlong2, ulonglong2,
float1, float2, float3, float4, double1, double2
These are vector types derived from the basic integer and floating-point types. They
are structures, and the 1st, 2nd, 3rd, and 4th components are accessible through the
fields x, y, z, and w, respectively. They all come with a constructor function of the
form make_<type name>; for example,
int2 make_int2(int x, int y);
which creates a vector of type int2 with value (x, y).
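As a usage sketch, a vector value can be constructed and its components accessed
like so:
float4 v = make_float4(1.0f, 2.0f, 3.0f, 4.0f);
float sum = v.x + v.y + v.z + v.w;  // 10.0f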
In host code, the alignment requirement of a vector type is equal to the alignment
requirement of its base type. This is not always the case in device code as detailed in
Table B-1.
Table B-1. Alignment Requirements in Device Code
Type Alignment
char1, uchar1 1
char2, uchar2 2
char3, uchar3 1
char4, uchar4 4
short1, ushort1 2
short2, ushort2 4
short3, ushort3 2
short4, ushort4 8
int1, uint1 4
int2, uint2 8
int3, uint3 4
int4, uint4 16
long1, ulong1 4 if sizeof(long) is equal to sizeof(int),
8 otherwise
long2, ulong2 8 if sizeof(long) is equal to sizeof(int),
16 otherwise
long3, ulong3 4 if sizeof(long) is equal to sizeof(int),
8 otherwise
long4, ulong4 16
longlong1, ulonglong1 8
longlong2, ulonglong2 16
float1 4
float2 8
float3 4
float4 16
double1 8
double2 16
B.3.2 dim3
This type is an integer vector type based on uint3 that is used to specify
dimensions. When defining a variable of type dim3, any component left unspecified
is initialized to 1.
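For example:
dim3 block(256);    // block.y and block.z default to 1
dim3 grid(16, 16);  // grid.z defaults to 1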
B.4 Built-in Variables
Built-in variables specify the grid and block dimensions and the block and thread
indices. They are only valid within functions that are executed on the device.
B.4.1 gridDim
This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the
grid.
B.4.2 blockIdx
This variable is of type uint3 (see Section B.3.1) and contains the block index
within the grid.
B.4.3 blockDim
This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the
block.
B.4.4 threadIdx
This variable is of type uint3 (see Section B.3.1) and contains the thread index
within the block.
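A common idiom, sketched below with a hypothetical kernel, combines these
variables to compute a unique global index for each thread:
__global__ void scale(float* data, float factor, int n)
{
    // Global 1D index of this thread within the grid
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}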
B.4.5 warpSize
This variable is of type int and contains the warp size in threads (see Section 4.1
for the definition of a warp).
B.4.6 Restrictions
 It is not allowed to take the address of any of the built-in variables.
 It is not allowed to assign values to any of the built-in variables.
B.5 Memory Fence Functions
void __threadfence_block();
waits until all global and shared memory accesses made by the calling thread prior to
__threadfence_block() are visible to all threads in the thread block.
void __threadfence();
waits until all global and shared memory accesses made by the calling thread prior to
__threadfence() are visible to:
 All threads in the thread block for shared memory accesses,
 All threads in the device for global memory accesses.
void __threadfence_system();
waits until all global and shared memory accesses made by the calling thread prior to
__threadfence_system() are visible to:
 All threads in the thread block for shared memory accesses,
 All threads in the device for global memory accesses,
 Host threads for page-locked host memory accesses (see Section 3.2.5.3).
__threadfence_system() is only supported by devices of compute
capability 2.x.
In general, when a thread issues a series of writes to memory in a particular order,
other threads may see the effects of these memory writes in a different order.
__threadfence_block(), __threadfence(), and
__threadfence_system() can be used to enforce some ordering.
One use case is when threads consume some data produced by other threads as
illustrated by the following code sample of a kernel that computes the sum of an
array of N numbers in one call. Each block first sums a subset of the array and
stores the result in global memory. When all blocks are done, the last block done
reads each of these partial sums from global memory and sums them to obtain the
final result. In order to determine which block is finished last, each block atomically
increments a counter to signal that it is done with computing and storing its partial
sum (see Section B.11 about atomic functions). The last block is the one that
receives the counter value equal to gridDim.x-1. If no fence is placed between
storing the partial sum and incrementing the counter, the counter might increment
before the partial sum is stored and might therefore reach gridDim.x-1 and let
the last block start reading partial sums before they have been actually updated in
memory.
__device__ unsigned int count = 0;
__shared__ bool isLastBlockDone;

__global__ void sum(const float* array, unsigned int N,
                    float* result)
{
    // Each block sums a subset of the input array
    float partialSum = calculatePartialSum(array, N);

    if (threadIdx.x == 0) {

        // Thread 0 of each block stores the partial sum
        // to global memory
        result[blockIdx.x] = partialSum;

        // Thread 0 makes sure its result is visible to
        // all other threads
        __threadfence();

        // Thread 0 of each block signals that it is done
        unsigned int value = atomicInc(&count, gridDim.x);

        // Thread 0 of each block determines if its block is
        // the last block to be done
        isLastBlockDone = (value == (gridDim.x - 1));
    }

    // Synchronize to make sure that each thread reads
    // the correct value of isLastBlockDone
    __syncthreads();

    if (isLastBlockDone) {

        // The last block sums the partial sums
        // stored in result[0 .. gridDim.x-1]
        float totalSum = calculateTotalSum(result);

        if (threadIdx.x == 0) {

            // Thread 0 of last block stores total sum
            // to global memory and resets count so that
            // next kernel call works properly
            result[0] = totalSum;
            count = 0;
        }
    }
}
B.6 Synchronization Functions
void __syncthreads();
waits until all threads in the thread block have reached this point and all global and
shared memory accesses made by these threads prior to __syncthreads() are
visible to all threads in the block.
__syncthreads() is used to coordinate communication between the threads of
the same block. When some threads within a block access the same addresses in
shared or global memory, there are potential read-after-write, write-after-read, or
write-after-write hazards for some of these memory accesses. These data hazards
can be avoided by synchronizing threads in-between these accesses.
__syncthreads() is allowed in conditional code, but only if the conditional
evaluates identically across the entire thread block; otherwise the code execution is
likely to hang or produce unintended side effects.
Devices of compute capability 2.x support three variations of __syncthreads()
described below.
int __syncthreads_count(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates
predicate for all threads of the block and returns the number of threads for
which predicate evaluates to non-zero.
int __syncthreads_and(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates
predicate for all threads of the block and returns non-zero if and only if
predicate evaluates to non-zero for all of them.
int __syncthreads_or(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates
predicate for all threads of the block and returns non-zero if and only if
predicate evaluates to non-zero for any of them.
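As a sketch (hypothetical kernel), __syncthreads_count() lets every thread of
a block learn how many of its peers satisfy a condition:
__global__ void countPositive(const int* in, int* out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // All threads synchronize and receive the same count
    int count = __syncthreads_count(in[i] > 0);
    if (threadIdx.x == 0)
        out[blockIdx.x] = count;
}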
B.7 Mathematical Functions
Section C.1 contains a comprehensive list of the C/C++ standard library
mathematical functions that are currently supported in device code, along with their
respective error bounds. When executed in host code, a given function uses the C
runtime implementation if available.
For some of the functions of Section C.1, a less accurate, but faster version exists in
the device runtime component; it has the same name prefixed with __ (such as
__sinf(x)). These intrinsic functions are listed in Section C.2, along with their
respective error bounds.
The compiler has an option (-use_fast_math) that forces each function in Table
B-2 to compile to its intrinsic counterpart. In addition to reducing the accuracy of
the affected functions, this may also cause some differences in special-case handling.
A more robust approach is to selectively replace mathematical function calls with
calls to intrinsic functions only where the performance gains merit it and where
changed properties such as reduced accuracy and different special-case handling can
be tolerated.
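For example, rather than compiling everything with -use_fast_math, a single call
site can be switched to its intrinsic counterpart, as in this sketch:
__device__ float fastSine(float x)
{
    return __sinf(x);  // faster, but reduced accuracy (see Section C.2)
}

__device__ float preciseSine(float x)
{
    return sinf(x);    // standard accuracy (see Section C.1)
}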
Table B-2. Functions Affected by -use_fast_math
Operator/Function Device Function
x/y __fdividef(x,y)
sinf(x) __sinf(x)
cosf(x) __cosf(x)
tanf(x) __tanf(x)
sincosf(x,sptr,cptr) __sincosf(x,sptr,cptr)
logf(x) __logf(x)
log2f(x) __log2f(x)
log10f(x) __log10f(x)
expf(x) __expf(x)
exp10f(x) __exp10f(x)
powf(x,y) __powf(x,y)
B.8 Texture Functions
For texture functions, a combination of the texture reference's immutable (i.e.
compile-time) and mutable (i.e. runtime) attributes determines how the texture
coordinates are interpreted, what processing occurs during the texture fetch, and the
return value delivered by the texture fetch. Immutable attributes are described in
Section 3.2.4.1.1. Mutable attributes are described in Section 3.2.4.1.2. Texture
fetching is described in Appendix F.
B.8.1 tex1Dfetch()
template<class Type>
Type tex1Dfetch(
texture<Type, 1, cudaReadModeElementType> texRef,
int x);

float tex1Dfetch(
texture<unsigned char, 1, cudaReadModeNormalizedFloat> texRef,
int x);

float tex1Dfetch(
texture<signed char, 1, cudaReadModeNormalizedFloat> texRef,
int x);

float tex1Dfetch(
texture<unsigned short, 1, cudaReadModeNormalizedFloat> texRef,
int x);

float tex1Dfetch(
texture<signed short, 1, cudaReadModeNormalizedFloat> texRef,
int x);
fetch the region of linear memory bound to texture reference texRef using integer
texture coordinate x. No texture filtering or addressing modes are supported. For
integer types, these functions may optionally promote the integer to single-precision
floating point.
Besides the functions shown above, 2- and 4-tuples are supported; for example:
float4 tex1Dfetch(
texture<uchar4, 1, cudaReadModeNormalizedFloat> texRef,
int x);
fetches the region of linear memory bound to texture reference texRef using
texture coordinate x.
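As a usage sketch (the declaration and binding of texRef to linear memory,
described in Sections 3.2.4.1.1 and 3.2.4.1.3, are assumed to happen elsewhere):
texture<float, 1, cudaReadModeElementType> texRef;

__global__ void copyViaTexture(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texRef, i);  // read through the texture cache
}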
B.8.2 tex1D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex1D(texture<Type, 1, readMode> texRef,
float x);
fetches the CUDA array bound to texture reference texRef using texture
coordinate x.
B.8.3 tex2D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex2D(texture<Type, 2, readMode> texRef,
float x, float y);
fetches the CUDA array or the region of linear memory bound to texture reference
texRef using texture coordinates x and y.
B.8.4 tex3D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex3D(texture<Type, 3, readMode> texRef,
float x, float y, float z);
fetches the CUDA array bound to texture reference texRef using texture
coordinates x, y, and z.
B.9 Surface Functions
Surface functions are only supported by devices of compute capability 2.0 and
higher.
Surface reference declaration is described in Section 3.2.4.2.1 and surface binding in
Section 3.2.4.2.2.
In the sections below, boundaryMode specifies the boundary mode, that is, how
out-of-range surface coordinates are handled; it is equal to either
cudaBoundaryModeClamp, in which case out-of-range coordinates are clamped
to the valid range, or cudaBoundaryModeZero, in which case out-of-range reads
return zero and out-of-range writes are ignored, or cudaBoundaryModeTrap, in
which case out-of-range accesses cause the kernel execution to fail.
B.9.1 surf1Dread()
template<class Type>
Type surf1Dread(surface<void, 1> surfRef, int x,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinate x.
B.9.2 surf1Dwrite()
template<class Type>
void surf1Dwrite(Type data, surface<void, 1> surfRef, int x,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at
coordinate x.
B.9.3 surf2Dread()
template<class Type>
Type surf2Dread(surface<void, 2> surfRef,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinates x
and y.
B.9.4 surf2Dwrite()
template<class Type>
void surf2Dwrite(Type data, surface<void, 2> surfRef,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at
coordinates x and y.
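As a sketch combining the two 2D functions (the surface references are assumed to
be bound to CUDA arrays as described in Section 3.2.4.2.2; the x coordinate is
specified in bytes, as explained in Section 3.2.4.2):
surface<void, 2> inputSurfRef;
surface<void, 2> outputSurfRef;

__global__ void copySurface(int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        // x is multiplied by 4 because the elements are 4-byte floats
        float data = surf2Dread<float>(inputSurfRef, x * 4, y);
        surf2Dwrite(data, outputSurfRef, x * 4, y);
    }
}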
B.10 Time Function
clock_t clock();
when executed in device code, returns the value of a per-multiprocessor counter
that is incremented every clock cycle. Sampling this counter at the beginning and at
the end of a kernel, taking the difference of the two samples, and recording the
result per thread provides a measure for each thread of the number of clock cycles
taken by the device to completely execute the thread, but not of the number of
clock cycles the device actually spent executing thread instructions. The former
number is greater than the latter since threads are time-sliced.
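A sketch of this measurement pattern (hypothetical kernel; the work being timed is
elided):
__global__ void timedKernel(clock_t* timer)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    clock_t start = clock();
    // ... code to be timed ...
    clock_t stop = clock();
    timer[i] = stop - start;  // per-thread clock cycle count
}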
B.11 Atomic Functions
An atomic function performs a read-modify-write atomic operation on one 32-bit or
64-bit word residing in global or shared memory. For example, atomicAdd()
reads a 32-bit word at some address in global or shared memory, adds a number to
it, and writes the result back to the same address. The operation is atomic in the
sense that it is guaranteed to be performed without interference from other threads.
In other words, no other thread can access this address until the operation is
complete.
Atomic functions can only be used in device functions and are only available for
devices of compute capability 1.1 and above.
Atomic functions operating on shared memory and atomic functions operating on
64-bit words are only available for devices of compute capability 1.2 and above.
Atomic functions operating on 64-bit words in shared memory are only available for
devices of compute capability 2.x and higher.
Atomic functions operating on mapped page-locked memory (Section 3.2.5.3) are
not atomic from the point of view of the host or other devices.
Atomic operations only work with signed and unsigned integers, with the exception
of atomicAdd() for devices of compute capability 2.x and atomicExch() for all
devices, which also work for single-precision floating-point numbers. Note however
that any atomic operation can be implemented based on atomicCAS() (Compare
And Swap). For example, atomicAdd() for double-precision floating-point
numbers can be implemented as follows:
__device__ double atomicAdd(double* address, double val)
{
    double old = *address, assumed;
    do {
        assumed = old;
        old = __longlong_as_double(
            atomicCAS((unsigned long long int*)address,
                      __double_as_longlong(assumed),
                      __double_as_longlong(val + assumed)));
    } while (assumed != old);
    return old;
}
B.11.1 Arithmetic Functions
B.11.1.1 atomicAdd()
int atomicAdd(int* address, int val);
unsigned int atomicAdd(unsigned int* address,
unsigned int val);
unsigned long long int atomicAdd(unsigned long long int* address,
unsigned long long int val);
float atomicAdd(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or
shared memory, computes (old + val), and stores the result back to memory at
the same address. These three operations are performed in one atomic transaction.
The function returns old.
The floating-point version of atomicAdd() is only supported by devices of
compute capability 2.x.
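As a usage sketch (hypothetical kernel), atomicAdd() lets many threads update
shared counters without interference, as in a simple histogram:
__global__ void histogram(const unsigned char* in,
                          unsigned int* bins, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[in[i]], 1u);  // safe concurrent increment
}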
B.11.1.2 atomicSub()
int atomicSub(int* address, int val);
unsigned int atomicSub(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes (old - val), and stores the result back to memory at the
same address. These three operations are performed in one atomic transaction. The
function returns old.
B.11.1.3 atomicExch()
int atomicExch(int* address, int val);
unsigned int atomicExch(unsigned int* address,
unsigned int val);
unsigned long long int atomicExch(unsigned long long int* address,
unsigned long long int val);
float atomicExch(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or
shared memory and stores val back to memory at the same address. These two
operations are performed in one atomic transaction. The function returns old.
B.11.1.4 atomicMin()
int atomicMin(int* address, int val);
unsigned int atomicMin(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes the minimum of old and val, and stores the result back to
memory at the same address. These three operations are performed in one atomic
transaction. The function returns old.
B.11.1.5 atomicMax()
int atomicMax(int* address, int val);
unsigned int atomicMax(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes the maximum of old and val, and stores the result back to
memory at the same address. These three operations are performed in one atomic
transaction. The function returns old.
B.11.1.6 atomicInc()
unsigned int atomicInc(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes ((old >= val) ? 0 : (old+1)), and stores the result
back to memory at the same address. These three operations are performed in one
atomic transaction. The function returns old.
B.11.1.7 atomicDec()
unsigned int atomicDec(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes (((old == 0) | (old > val)) ? val : (old-1)),
and stores the result back to memory at the same address. These three operations
are performed in one atomic transaction. The function returns old.
B.11.1.8 atomicCAS()
int atomicCAS(int* address, int compare, int val);
unsigned int atomicCAS(unsigned int* address,
unsigned int compare,
unsigned int val);
unsigned long long int atomicCAS(unsigned long long int* address,
unsigned long long int compare,
unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or
shared memory, computes (old == compare ? val : old), and stores the
result back to memory at the same address. These three operations are performed in
one atomic transaction. The function returns old (Compare And Swap).
B.11.2 Bitwise Functions
B.11.2.1 atomicAnd()

int atomicAnd(int* address, int val);
unsigned int atomicAnd(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes (old & val), and stores the result back to memory at the
same address. These three operations are performed in one atomic transaction. The
function returns old.
B.11.2.2 atomicOr()

int atomicOr(int* address, int val);
unsigned int atomicOr(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes (old | val), and stores the result back to memory at the
same address. These three operations are performed in one atomic transaction. The
function returns old.
B.11.2.3 atomicXor()

int atomicXor(int* address, int val);
unsigned int atomicXor(unsigned int* address,
unsigned int val);
reads the 32-bit word old located at the address address in global or shared
memory, computes (old ^ val), and stores the result back to memory at the
same address. These three operations are performed in one atomic transaction. The
function returns old.
B.12 Warp Vote Functions
Warp vote functions are only supported by devices of compute capability 1.2 and
higher (see Section 4.1 for the definition of a warp).
int __all(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if
predicate evaluates to non-zero for all of them.
int __any(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if
predicate evaluates to non-zero for any of them.
unsigned int __ballot(int predicate);
evaluates predicate for all threads of the warp and returns an integer whose Nth
bit is set if and only if predicate evaluates to non-zero for the Nth thread of the
warp. This function is only supported by devices of compute capability 2.x.
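For illustration, a sketch (hypothetical kernel) that uses __any() to skip a store
when no thread in the warp needs it:
__global__ void clampNegative(float* data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // The whole warp skips the store if none of its threads
    // sees a negative value
    if (__any(data[i] < 0.0f))
        data[i] = fmaxf(data[i], 0.0f);
}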
B.13 Profiler Counter Function
Each multiprocessor has a set of sixteen hardware counters that an application can
increment with a single instruction by calling the __prof_trigger() function.
void __prof_trigger(int counter);
increments by one per warp the per-multiprocessor hardware counter of index
counter. Counters 8 to 15 are reserved and should not be used by applications.
The value of counters 0, 1, …, 7 for the first multiprocessor can be obtained via the
CUDA profiler by listing prof_trigger_00, prof_trigger_01, …,
prof_trigger_07 in the profiler.conf file (see the profiler manual for
more details). All counters are reset before each kernel call (note that when an
application is run via a CUDA debugger or profiler (cuda-gdb, CUDA Visual
Profiler, Parallel Nsight), all launches are synchronous).
B.14 Formatted Output
Formatted output is only supported by devices of compute capability 2.x.
int printf(const char *format[, arg, ...]);
prints formatted output from a kernel to a host-side output stream.
The in-kernel printf() function behaves in a similar way to the standard C-library
printf() function, and the user is referred to the host system's manual pages for a
complete description of printf() behavior. In essence, the string passed in as
format is output to a stream on the host, with substitutions made from the
argument list wherever a format specifier is encountered. Supported format
specifiers are listed below.
The printf() command is executed like any other device-side function: per-thread,
and in the context of the calling thread. From a multi-threaded kernel, this means
that a straightforward call to printf() will be executed by every thread, using that
thread's data as specified. Multiple versions of the output string will then appear at
the host stream, once for each thread that encountered the printf().
It is up to the programmer to limit the output to a single thread if only a single
output string is desired (see Section B.14.4 for an illustrative example).
Unlike the C-standard printf(), which returns the number of characters printed,
CUDA's printf() returns the number of arguments parsed. If no arguments
follow the format string, 0 is returned. If the format string is NULL, -1 is returned.
If an internal error occurs, -2 is returned.
B.14.1 Format Specifiers
As for standard printf(), format specifiers take the form:
%[flags][width][.precision][size]type
The following fields are supported (see widely-available documentation for a
complete description of all behaviors):
 Flags: '#' ' ' '0' '+' '-'
 Width: '*' '0-9'
 Precision: '0-9'
 Size: 'h' 'l' 'll'
 Type: '%cdiouxXpeEfgGaAs'
Note that CUDA's printf() will accept any combination of flag, width, precision,
size and type, whether or not overall they form a valid format specifier. In other
words, "%hd" will be accepted and printf will expect a double-precision variable in
the corresponding location in the argument list.
B.14.2 Limitations
Final formatting of the printf() output takes place on the host system. This
means that the format string must be understood by the host system's compiler and
C library. Every effort has been made to ensure that the format specifiers supported
by CUDA's printf() form a universal subset of those supported by the most
common host compilers, but exact behavior will be host-O/S-dependent.
As described in Section B.14.1, printf() will accept all combinations of valid flags
and types. This is because it cannot determine what will and will not be valid on the
host system where the final output is formatted. The effect of this is that output
may be undefined if the program emits a format string which contains invalid
combinations.
The output buffer for printf() is set to a fixed size before kernel launch (see
Section B.14.3). This buffer is circular and is flushed at any host-side synchronization point
and when the context is explicitly destroyed; if more output is produced during
kernel execution than can fit in the buffer, older output is overwritten.
The printf() command can accept at most 32 arguments in addition to the
format string. Additional arguments beyond this will be ignored, and the format
specifier output as-is.
Owing to the differing size of the long type (four bytes on 64-bit Windows
platforms, eight bytes on other 64-bit platforms), a kernel which is compiled on a
non-Windows 64-bit machine but then run on a win64 machine will see corrupted
output for all format strings which include "%ld". It is recommended that the
compilation platform match the execution platform to ensure safety.
The output buffer for printf() is not flushed automatically to the output stream,
but instead is flushed only when one of these actions is performed:
 Kernel launch via <<<>>> or cuLaunch(),
 Synchronization via cudaThreadSynchronize(),
cuCtxSynchronize(), cudaStreamSynchronize(), or
cuStreamSynchronize(),
 Module loading/unloading via cuModuleLoad() or cuModuleUnload(),
 Context destruction via cudaThreadExit() or cuCtxDestroy().
Note that the buffer is not flushed automatically when the program exits. The user
must call cudaThreadExit() or cuCtxDestroy() explicitly, as shown in the
examples below.
B.14.3 Associated Host-Side API
The following API functions get and set the size of the buffer used to transfer the
printf() arguments and internal metadata to the host (default is 1 megabyte):
 Driver API:
cuCtxGetLimit(size_t* size, CU_LIMIT_PRINTF_FIFO_SIZE)
cuCtxSetLimit(CU_LIMIT_PRINTF_FIFO_SIZE, size_t size)
 Runtime API:
cudaThreadGetLimit(size_t* size,cudaLimitPrintfFifoSize)
cudaThreadSetLimit(cudaLimitPrintfFifoSize, size_t size)
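For example, the buffer could be enlarged to 4 megabytes with the runtime API
before any kernel that calls printf() is launched (a sketch):
cudaThreadSetLimit(cudaLimitPrintfFifoSize, 4 * 1024 * 1024);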
B.14.4 Examples
The following code sample:
__global__ void helloCUDA(float f)
{
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaThreadExit();
}
will output:
Hello thread 0, f=1.2345
Hello thread 1, f=1.2345
Hello thread 2, f=1.2345
Hello thread 3, f=1.2345
Hello thread 4, f=1.2345
Notice how each thread encounters the printf() command, so there are as many
lines of output as there were threads launched in the grid. As expected, global values
(i.e. float f) are common between all threads, and local values (i.e.
threadIdx.x) are distinct per-thread.
The following code sample:
__global__ void helloCUDA(float f)
{
    if (threadIdx.x == 0)
        printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaThreadExit();
}
will output:
Hello thread 0, f=1.2345
Self-evidently, the if() statement limits which threads will call printf, so that
only a single line of output is seen.
B.15 Dynamic Global Memory Allocation
void* malloc(size_t size);
void free(void* ptr);
allocate and free memory dynamically from a fixed-size heap in global memory.
The CUDA in-kernel malloc() function allocates at least size bytes from the
device heap and returns a pointer to the allocated memory or NULL if insufficient
memory exists to fulfill the request. The returned pointer is guaranteed to be aligned
to a 16-byte boundary.
The CUDA in-kernel free() function deallocates the memory pointed to by ptr,
which must have been returned by a previous call to malloc(). If ptr is NULL,
the call to free() is ignored. Repeated calls to free() with the same ptr have
undefined behavior.
The memory allocated by a given CUDA thread via malloc() remains allocated
for the lifetime of the CUDA context, or until it is explicitly released by a call to
free(). It can be used by any other CUDA threads even from subsequent kernel
launches. Any CUDA thread may free memory allocated by another thread, but care
should be taken to ensure that the same pointer is not freed more than once.
B.15.1 Heap Memory Allocation
The device memory heap has a fixed size that must be specified before any program
using malloc() or free() is loaded into the context. A default heap of eight
megabytes is allocated if any program uses malloc() without explicitly specifying
the heap size.
The following API functions get and set the heap size:
 Driver API:
cuCtxGetLimit(size_t* size, CU_LIMIT_MALLOC_HEAP_SIZE)
cuCtxSetLimit(CU_LIMIT_MALLOC_HEAP_SIZE, size_t size)
 Runtime API:
cudaThreadGetLimit(size_t* size, cudaLimitMallocHeapSize)
cudaThreadSetLimit(cudaLimitMallocHeapSize, size_t size)
The heap size granted will be at least size bytes. cuCtxGetLimit() and
cudaThreadGetLimit() return the currently requested heap size.
The actual memory allocation for the heap occurs when a module is loaded into the
context, either explicitly via the CUDA driver API (see Section 3.3.2), or implicitly
via the CUDA runtime API (see Section 3.2). If the memory allocation fails, the
module load will generate a CUDA_ERROR_SHARED_OBJECT_INIT_FAILED
error.
Heap size cannot be changed once a module load has occurred and it does not
resize dynamically according to need.
Memory reserved for the device heap is in addition to memory allocated through
host-side CUDA API calls such as cudaMalloc().
B.15.2 Interoperability with Host Memory API
Memory allocated via malloc() cannot be freed using the runtime or driver API
(i.e. by calling any of the free memory functions from Sections 3.2.1 and 3.3.4).
Similarly, memory allocated via the runtime or driver API (i.e. by calling any of the
memory allocation functions from Sections 3.2.1 and 3.3.4) cannot be freed via
free().
Memory allocated via malloc() can be copied using the runtime or driver API (i.e.
by calling any of the copy memory functions from Sections 3.2.1 and 3.3.4).
B.15.3 Examples
B.15.3.1 Per Thread Allocation
The following code sample:
__global__ void mallocTest()
{
    char* ptr = (char*)malloc(123);
    printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
    free(ptr);
}

int main()
{
    // Set a heap size of 128 megabytes. Note that this must
    // be done before any kernel is launched.
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<1, 5>>>();
    cudaThreadSynchronize();
}

will output:
Thread 0 got pointer: 00057020
Thread 1 got pointer: 0005708c
Thread 2 got pointer: 000570f8
Thread 3 got pointer: 00057164
Thread 4 got pointer: 000571d0
Notice how each thread encounters the malloc() command and so receives its
own allocation. (Exact pointer values will vary: these are illustrative.)
B.15.3.2 Per Thread Block Allocation
__global__ void mallocTest()
{
    __shared__ int* data;

    // The first thread in the block does the allocation
    // and then shares the pointer with all other threads
    // through shared memory, so that access can easily be
    // coalesced. 64 ints are allocated per thread.
    if (threadIdx.x == 0)
        data = (int*)malloc(blockDim.x * 64 * sizeof(int));
    __syncthreads();

    // Check for failure
    if (data == NULL)
        return;

    // Threads index into the memory, ensuring coalescence
    int* ptr = data;
    for (int i = 0; i < 64; ++i)
        ptr[i * blockDim.x + threadIdx.x] = threadIdx.x;

    // Ensure all threads complete before freeing
    __syncthreads();

    // Only one thread may free the memory!
    if (threadIdx.x == 0)
        free(data);
}

int main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<10, 128>>>();
    cudaThreadSynchronize();
}
B.15.3.3 Allocation Persisting Between Kernel Launches
#define NUM_BLOCKS 20

__device__ int* dataptr[NUM_BLOCKS]; // Per-block pointer

__global__ void allocmem()
{
    // Only the first thread in the block does the allocation
    // since we want only one allocation per block.
    if (threadIdx.x == 0)
        dataptr[blockIdx.x] = (int*)malloc(blockDim.x * 4);
    __syncthreads();

    // Check for failure
    if (dataptr[blockIdx.x] == NULL)
        return;

    // Zero the data with all threads in parallel
    dataptr[blockIdx.x][threadIdx.x] = 0;
}

// Simple example: store thread ID into each element
__global__ void usemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        ptr[threadIdx.x] += threadIdx.x;
}

// Print the content of the buffer before freeing it
__global__ void freemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        printf("Block %d, Thread %d: final value = %d\n",
               blockIdx.x, threadIdx.x, ptr[threadIdx.x]);

    // Only free from one thread!
    if (threadIdx.x == 0)
        free(ptr);
}

int main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);

    // Allocate memory
    allocmem<<< NUM_BLOCKS, 10 >>>();

    // Use memory
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();

    // Free memory
    freemem<<< NUM_BLOCKS, 10 >>>();

    cudaThreadSynchronize();
}
B.16 Execution Configuration
Any call to a __global__ function must specify the execution configuration for that
call. The execution configuration defines the dimension of the grid and blocks that
will be used to execute the function on the device, as well as the associated stream
(see Section 3.3.9.1 for a description of streams).
When using the driver API, the execution configuration is specified through a series
of driver function calls as detailed in Section 3.3.3.
When using the runtime API (Section 3.2), the execution configuration is specified
by inserting an expression of the form <<< Dg, Db, Ns, S >>> between the
function name and the parenthesized argument list, where:
 Dg is of type dim3 (see Section B.3.2) and specifies the dimension and size of
the grid, such that Dg.x * Dg.y equals the number of blocks being launched;
Dg.z must be equal to 1;
 Db is of type dim3 (see Section B.3.2) and specifies the dimension and size of
each block, such that Db.x * Db.y * Db.z equals the number of threads
per block;
 Ns is of type size_t and specifies the number of bytes in shared memory that
is dynamically allocated per block for this call in addition to the statically
allocated memory; this dynamically allocated memory is used by any of the
variables declared as an external array as mentioned in Section B.2.3; Ns is an
optional argument which defaults to 0;
 S is of type cudaStream_t and specifies the associated stream; S is an
optional argument which defaults to 0.
As an example, a function declared as
__global__ void Func(float* parameter);
must be called like this:
Func<<< Dg, Db, Ns >>>(parameter);
The arguments to the execution configuration are evaluated before the actual
function arguments and, like the function arguments, are currently passed via shared
memory to the device.
The function call will fail if Dg or Db are greater than the maximum sizes allowed
for the device as specified in Appendix G, or if Ns is greater than the maximum
amount of shared memory available on the device, minus the amount of shared
memory required for static allocation, function arguments (for devices of compute
capability 1.x), and the execution configuration.
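As a further sketch, a two-dimensional configuration with dynamically allocated
shared memory might look as follows (width and height are assumed host
variables):
// Host code
dim3 Db(16, 16);  // 16 x 16 = 256 threads per block
dim3 Dg((width + Db.x - 1) / Db.x, (height + Db.y - 1) / Db.y);
Func<<< Dg, Db, 256 >>>(parameter);  // 256 bytes of dynamic shared memory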
B.17 Launch Bounds
As discussed in detail in Section 5.2.3, the fewer registers a kernel uses, the more
threads and thread blocks are likely to reside on a multiprocessor, which can
improve performance.
Therefore, the compiler uses heuristics to minimize register usage while keeping
register spilling (see Section 5.3.2.2) and instruction count to a minimum. An
application can optionally aid these heuristics by providing additional information to
the compiler in the form of launch bounds that are specified using the
__launch_bounds__() qualifier in the definition of a __global__ function:
__global__ void
__launch_bounds__(maxThreadsPerBlock, minBlocksPerMultiprocessor)
MyKernel(...)
{
...
}
 maxThreadsPerBlock specifies the maximum number of threads per block
with which the application will ever launch MyKernel(); it compiles to the
.maxntid PTX directive;
 minBlocksPerMultiprocessor is optional and specifies the desired
minimum number of resident blocks per multiprocessor; it compiles to the
.minnctapersm PTX directive.
If launch bounds are specified, the compiler first derives from them the upper limit
L on the number of registers the kernel should use to ensure that
minBlocksPerMultiprocessor blocks (or a single block if
minBlocksPerMultiprocessor is not specified) of maxThreadsPerBlock
threads can reside on the multiprocessor (see Section 4.2 for the relationship
between the number of registers used by a kernel and the number of registers
allocated per block). The compiler then optimizes register usage in the following
way:
 If the initial register usage is higher than L, the compiler reduces it further until
it becomes less than or equal to L, usually at the expense of more local memory
usage and/or a higher instruction count;
 If the initial register usage is lower than L,
 If maxThreadsPerBlock is specified and
minBlocksPerMultiprocessor is not, the compiler uses
maxThreadsPerBlock to determine the register usage thresholds for the
transitions between n and n+1 resident blocks (i.e. when using one less
register makes room for an additional resident block as in the example of
Section 5.2.3) and then applies similar heuristics as when no launch bounds
are specified;
 If both minBlocksPerMultiprocessor and maxThreadsPerBlock
are specified, the compiler may increase register usage as high as L to
reduce the number of instructions and better hide single thread instruction
latency.
A kernel will fail to launch if it is executed with more threads per block than its
launch bound maxThreadsPerBlock.
Optimal launch bounds for a given kernel will usually differ across major
architecture revisions. The sample code below shows how this is typically handled in
device code using the __CUDA_ARCH__ macro introduced in Section 3.1.4.
#define THREADS_PER_BLOCK 256
#if __CUDA_ARCH__ >= 200
#define MY_KERNEL_MAX_THREADS (2 * THREADS_PER_BLOCK)
#define MY_KERNEL_MIN_BLOCKS 3
#else
#define MY_KERNEL_MAX_THREADS THREADS_PER_BLOCK
#define MY_KERNEL_MIN_BLOCKS 2
#endif

// Device code
__global__ void
__launch_bounds__(MY_KERNEL_MAX_THREADS, MY_KERNEL_MIN_BLOCKS)
MyKernel(...)
{
...
}
In the common case where MyKernel is invoked with the maximum number of
threads per block (specified as the first parameter of __launch_bounds__()), it
is tempting to use MY_KERNEL_MAX_THREADS as the number of threads per block
in the execution configuration:
// Host code
MyKernel<<<blocksPerGrid, MY_KERNEL_MAX_THREADS>>>(...);
This will not work, however, since __CUDA_ARCH__ is undefined in host code as
mentioned in Section 3.1.4, so MyKernel will launch with 256 threads per block
even when compiling for devices of compute capability 2.0 or higher. Instead the
number of threads per block should be determined:
 Either at compile time using a macro that does not depend on
__CUDA_ARCH__, for example
// Host code
MyKernel<<<blocksPerGrid, THREADS_PER_BLOCK>>>(...);
 Or at runtime based on the compute capability
// Host code
cudaGetDeviceProperties(&deviceProp, device);
int threadsPerBlock =
    (deviceProp.major >= 2 ?
        2 * THREADS_PER_BLOCK : THREADS_PER_BLOCK);
MyKernel<<<blocksPerGrid, threadsPerBlock>>>(...);
Register usage is reported by the --ptxas-options=-v compiler option. The
number of resident blocks can be derived from the occupancy reported by the
CUDA profiler (see Section 5.2.3 for a definition of occupancy).
Register usage can also be controlled for all __global__ functions in a file using
the -maxrregcount compiler option. The value of -maxrregcount is ignored
for functions with launch bounds.
Appendix C.
Mathematical Functions
Functions from Section C.1 can be used in both host and device code whereas
functions from Section C.2 can only be used in device code.
Note that floating-point functions are overloaded, so that in general, there are three
prototypes for a given function <func-name>:
(1) double <func-name>(double), e.g. double log(double)
(2) float <func-name>(float), e.g. float log(float)
(3) float <func-name>f(float), e.g. float logf(float)
This means, in particular, that passing a float argument always results in a float
result (variants (2) and (3) above).
C.1 Standard Functions
This section lists all the mathematical standard library functions supported in device
code. It also specifies the error bounds of each function when executed on the
device. These error bounds also apply when the function is executed on the host in
the case where the host does not supply the function. They are generated from
extensive but not exhaustive tests, so they are not guaranteed bounds.
C.1.1 Single-Precision Floating-Point Functions
Addition and multiplication are IEEE-compliant, so have a maximum error of
0.5 ulp. However, on the device, the compiler often combines them into a single
multiply-add instruction (FMAD) and for devices of compute capability 1.x, FMAD
truncates the intermediate result of the multiplication as mentioned in Section G.2.
This combination can be avoided by using the __fadd_rn() and __fmul_rn()
intrinsic functions (see Section C.2).
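For example, a sketch of a multiply-add written so that the compiler cannot
contract it into an FMAD:
__device__ float madKeepRounding(float a, float b, float c)
{
    // __fmul_rn() and __fadd_rn() are never merged into an FMAD
    return __fadd_rn(__fmul_rn(a, b), c);
}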
The recommended way to round a single-precision floating-point operand to an
integer, with the result being a single-precision floating-point number, is rintf(),
not roundf(). The reason is that roundf() maps to an 8-instruction sequence on
the device, whereas rintf() maps to a single instruction. truncf(), ceilf(),
and floorf() each map to a single instruction as well.
Table C-1. Mathematical Standard Library Functions with
Maximum ULP Error
The maximum error is stated as the absolute value of the difference
in ulps between a correctly rounded single-precision result and the
result returned by the CUDA library function.
Function Maximum ulp error
x+y 0 (IEEE-754 round-to-nearest-even)
(except for devices of compute capability 1.x when addition is
merged into an FMAD)
x*y 0 (IEEE-754 round-to-nearest-even)
(except for devices of compute capability 1.x when
multiplication is merged into an FMAD)
x/y 0 for compute capability ≥ 2 when compiled with
-prec-div=true
2 (full range), otherwise
1/x 0 for compute capability ≥ 2 when compiled with
-prec-div=true
1 (full range), otherwise
rsqrtf(x)
1/sqrtf(x)
2 (full range)
Applies to 1/sqrtf(x) only when it is converted to
rsqrtf(x) by the compiler.
sqrtf(x) 0 for compute capability ≥ 2 when compiled with
-prec-sqrt=true
3 (full range), otherwise
cbrtf(x) 1 (full range)
rcbrtf(x) 2 (full range)
hypotf(x,y) 3 (full range)
expf(x) 2 (full range)
exp2f(x) 2 (full range)
exp10f(x) 2 (full range)
expm1f(x) 1 (full range)
logf(x) 1 (full range)
log2f(x) 3 (full range)
log10f(x) 3 (full range)
log1pf(x) 2 (full range)
sinf(x) 2 (full range)
cosf(x) 2 (full range)
tanf(x) 4 (full range)
sincosf(x,sptr,cptr) 2 (full range)
sinpif(x) 2 (full range)
asinf(x) 4 (full range)
acosf(x) 3 (full range)
atanf(x) 2 (full range)
atan2f(y,x) 3 (full range)
sinhf(x) 3 (full range)
coshf(x) 2 (full range)
tanhf(x) 2 (full range)
asinhf(x) 3 (full range)
acoshf(x) 4 (full range)
atanhf(x) 3 (full range)
powf(x,y) 8 (full range)
erff(x) 3 (full range)
erfcf(x) 6 (full range)
erfinvf(x) 3 (full range)
erfcinvf(x) 7 (full range)
lgammaf(x) 6 (outside interval -10.001 ... -2.264; larger inside)
tgammaf(x) 11 (full range)
fmaf(x,y,z) 0 (full range)
frexpf(x,exp) 0 (full range)
ldexpf(x,exp) 0 (full range)
scalbnf(x,n) 0 (full range)
scalblnf(x,l) 0 (full range)
logbf(x) 0 (full range)
ilogbf(x) 0 (full range)
fmodf(x,y) 0 (full range)
remainderf(x,y) 0 (full range)
remquof(x,y,iptr) 0 (full range)
modff(x,iptr) 0 (full range)
fdimf(x,y) 0 (full range)
truncf(x) 0 (full range)
roundf(x) 0 (full range)
rintf(x) 0 (full range)
nearbyintf(x) 0 (full range)
ceilf(x) 0 (full range)
floorf(x) 0 (full range)
lrintf(x) 0 (full range)
lroundf(x) 0 (full range)
llrintf(x) 0 (full range)
llroundf(x) 0 (full range)
signbit(x) N/A
isinf(x) N/A
isnan(x) N/A
isfinite(x) N/A
copysignf(x,y) N/A
fminf(x,y) N/A
fmaxf(x,y) N/A
fabsf(x) N/A
nanf(cptr) N/A
nextafterf(x,y) N/A
C.1.2 Double-Precision Floating-Point Functions
The errors listed below only apply when compiling for devices with native double-
precision support. When compiling for devices without such support, such as
devices of compute capability 1.2 and lower, the double type gets demoted to
float by default and the double-precision math functions are mapped to their
single-precision equivalents.
The recommended way to round a double-precision floating-point operand to an
integer, with the result being a double-precision floating-point number, is rint(),
not round(). The reason is that round() maps to an 8-instruction sequence on
the device, whereas rint() maps to a single instruction. trunc(), ceil(), and
floor() each map to a single instruction as well.
Table C-2. Mathematical Standard Library Functions with
Maximum ULP Error
The maximum error is stated as the absolute value of the difference
in ulps between a correctly rounded double-precision result and the
result returned by the CUDA library function.
Function Maximum ulp error
x+y 0 (IEEE-754 round-to-nearest-even)
x*y 0 (IEEE-754 round-to-nearest-even)
x/y 0 (IEEE-754 round-to-nearest-even)
1/x 0 (IEEE-754 round-to-nearest-even)
sqrt(x) 0 (IEEE-754 round-to-nearest-even)
rsqrt(x) 1 (full range)
cbrt(x) 1 (full range)
rcbrt(x) 1 (full range)
hypot(x,y) 2 (full range)
exp(x) 1 (full range)
exp2(x) 1 (full range)
exp10(x) 1 (full range)
expm1(x) 1 (full range)
log(x) 1 (full range)
log2(x) 1 (full range)
log10(x) 1 (full range)
log1p(x) 1 (full range)
sin(x) 2 (full range)
cos(x) 2 (full range)
tan(x) 2 (full range)
sincos(x,sptr,cptr) 2 (full range)
sinpi(x) 2 (full range)
asin(x) 2 (full range)
acos(x) 2 (full range)
atan(x) 2 (full range)
atan2(y,x) 2 (full range)
sinh(x) 1 (full range)
cosh(x) 1 (full range)
tanh(x) 1 (full range)
asinh(x) 2 (full range)
acosh(x) 2 (full range)
atanh(x) 2 (full range)
pow(x,y) 2 (full range)
erf(x) 2 (full range)
erfc(x) 5 (full range)
erfinv(x) 8 (full range)
erfcinv(x) 8 (full range)
lgamma(x) 4 (outside interval -11.0001 ... -2.2637; larger inside)
tgamma(x) 8 (full range)
fma(x,y,z) 0 (IEEE-754 round-to-nearest-even)
frexp(x,exp) 0 (full range)
ldexp(x,exp) 0 (full range)
scalbn(x,n) 0 (full range)
scalbln(x,l) 0 (full range)
logb(x) 0 (full range)
ilogb(x) 0 (full range)
fmod(x,y) 0 (full range)
remainder(x,y) 0 (full range)
remquo(x,y,iptr) 0 (full range)
modf(x,iptr) 0 (full range)
fdim(x,y) 0 (full range)
trunc(x) 0 (full range)
round(x) 0 (full range)
rint(x) 0 (full range)
nearbyint(x) 0 (full range)
ceil(x) 0 (full range)
floor(x) 0 (full range)
lrint(x) 0 (full range)
lround(x) 0 (full range)
llrint(x) 0 (full range)
llround(x) 0 (full range)
signbit(x) N/A
isinf(x) N/A
isnan(x) N/A
isfinite(x) N/A
copysign(x,y) N/A
fmin(x,y) N/A
fmax(x,y) N/A
fabs(x) N/A
nan(cptr) N/A
nextafter(x,y) N/A
C.1.3 Integer Functions
Integer min(x,y) and max(x,y) are supported and map to a single instruction on
the device.
C.2 Intrinsic Functions
This section lists the intrinsic functions that are only supported in device code.
Among these functions are the less accurate, but faster versions of some of the
functions of Section C.1; they have the same name prefixed with __ (such as
__sinf(x)).
Functions suffixed with _rn operate using the round-to-nearest-even rounding
mode.
Functions suffixed with _rz operate using the round-towards-zero rounding mode.
Functions suffixed with _ru operate using the round-up (to positive infinity)
rounding mode.
Functions suffixed with _rd operate using the round-down (to negative infinity)
rounding mode.
C.2.1 Single-Precision Floating-Point Functions
__fadd_rn() and __fmul_rn() map to addition and multiplication operations
that the compiler never merges into FMADs. By contrast, additions and
multiplications generated from the '*' and '+' operators will frequently be combined
into FMADs.
The accuracy of floating-point division varies depending on the compute capability
of the device and whether the code is compiled with -prec-div=false or
-prec-div=true. For devices of compute capability 1.x, or for devices of
compute capability 2.x when the code is compiled with -prec-div=false, both
the regular division "/" operator and __fdividef(x,y) have the same accuracy,
but for 2^126 < y < 2^128, __fdividef(x,y) delivers a result of zero, whereas the
"/" operator delivers the correct result to within the accuracy stated in Table C-3.
Also, for 2^126 < y < 2^128, if x is infinity, __fdividef(x,y) delivers a NaN (as a
result of multiplying infinity by zero), while the "/" operator returns infinity. For
devices of compute capability 2.x when the code is compiled with
-prec-div=true, the "/" operator is IEEE-compliant as mentioned in
Section C.1.1.
__saturate(x) returns 0 if x is less than 0, 1 if x is more than 1, and x
otherwise.
__float2ll_[rn,rz,ru,rd](x) (respectively
__float2ull_[rn,rz,ru,rd](x)) converts single-precision floating-point
parameter x to 64-bit signed (respectively unsigned) integer with specified IEEE-
754 rounding modes.
Table C-3. Single-Precision Floating-Point Intrinsic Functions
Supported by the CUDA Runtime Library with
Respective Error Bounds
Function Error bounds
__fadd_[rn,rz,ru,rd](x,y) IEEE-compliant.
__fmul_[rn,rz,ru,rd](x,y) IEEE-compliant.
__fmaf_[rn,rz,ru,rd](x,y,z) IEEE-compliant.
__frcp_[rn,rz,ru,rd](x) IEEE-compliant.
__fsqrt_[rn,rz,ru,rd](x) IEEE-compliant.
__fdiv_[rn,rz,ru,rd](x,y) IEEE-compliant.
__fdividef(x,y) For y in [2^-126, 2^126], the maximum ulp error is 2.
__expf(x) The maximum ulp error is
2 + floor(abs(1.16 * x)).
__exp10f(x) The maximum ulp error is
2 + floor(abs(2.95 * x)).
__logf(x) For x in [0.5, 2], the maximum absolute error
is 2^-21.41, otherwise, the maximum ulp error is 3.
__log2f(x) For x in [0.5, 2], the maximum absolute error
is 2^-22, otherwise, the maximum ulp error is 2.
__log10f(x) For x in [0.5, 2], the maximum absolute error
is 2^-24, otherwise, the maximum ulp error is 3.
__sinf(x) For x in [-π, π], the maximum absolute error
is 2^-21.41, and larger otherwise.
__cosf(x) For x in [-π, π], the maximum absolute error
is 2^-21.19, and larger otherwise.
__sincosf(x,sptr,cptr) Same as sinf(x) and cosf(x).
__tanf(x) Derived from its implementation as
__sinf(x) * (1 / __cosf(x)).
__powf(x, y) Derived from its implementation as
exp2f(y * __log2f(x)).
__saturate(x) N/A
C.2.2 Double-Precision Floating-Point Functions
__dadd_rn() and __dmul_rn() map to addition and multiplication operations
that the compiler never merges into FMADs. By contrast, additions and
multiplications generated from the '*' and '+' operators will frequently be combined
into FMADs.
Table C-4. Double-Precision Floating-Point Intrinsic
Functions Supported by the CUDA Runtime
Library with Respective Error Bounds
Function Error bounds
__dadd_[rn,rz,ru,rd](x,y) IEEE-compliant.
__dmul_[rn,rz,ru,rd](x,y) IEEE-compliant.
__fma_[rn,rz,ru,rd](x,y,z) IEEE-compliant.
__ddiv_[rn,rz,ru,rd](x,y) IEEE-compliant.
Requires compute capability ≥ 2.
__drcp_[rn,rz,ru,rd](x) IEEE-compliant.
Requires compute capability ≥ 2.
__dsqrt_[rn,rz,ru,rd](x) IEEE-compliant.
Requires compute capability ≥ 2.
C.2.3 Integer Functions
__[u]mul24(x,y) computes the product of the 24 least significant bits of the
integer parameters x and y and delivers the 32 least significant bits of the result. The
8 most significant bits of x or y are ignored.
__[u]mulhi(x,y) computes the product of the integer parameters x and y and
delivers the 32 most significant bits of the 64-bit result.
__[u]mul64hi(x,y) computes the product of the 64-bit integer parameters x
and y and delivers the 64 most significant bits of the 128-bit result.
__[u]sad(x,y,z) (Sum of Absolute Difference) returns the sum of integer
parameter z and the absolute value of the difference between integer parameters x
and y.
__clz(x) returns the number, between 0 and 32 inclusive, of consecutive zero bits
starting at the most significant bit (i.e. bit 31) of integer parameter x.
__clzll(x) returns the number, between 0 and 64 inclusive, of consecutive zero
bits starting at the most significant bit (i.e. bit 63) of 64-bit integer parameter x.
__ffs(x) returns the position of the first (least significant) bit set in integer
parameter x. The least significant bit is position 1. If x is 0, __ffs() returns 0.
Note that this is identical to the Linux function ffs.
__ffsll(x) returns the position of the first (least significant) bit set in 64-bit
integer parameter x. The least significant bit is position 1. If x is 0, __ffsll()
returns 0. Note that this is identical to the Linux function ffsll.
__popc(x) returns the number of bits that are set to 1 in the binary representation
of 32-bit integer parameter x.
__popcll(x) returns the number of bits that are set to 1 in the binary
representation of 64-bit integer parameter x.
__brev(x) reverses the bits of 32-bit unsigned integer parameter x, i.e. bit N of
the result corresponds to bit 31-N of x.
__brevll(x) reverses the bits of 64-bit unsigned long long parameter x, i.e. bit N
of the result corresponds to bit 63-N of x.
__byte_perm(x,y,s) returns, as a 32-bit integer r, four bytes from eight input
bytes provided in the two input integers x and y. The input bytes are indexed as
follows:
input[0] = x<0:7> input[1] = x<8:15>
input[2] = x<16:23> input[3] = x<24:31>
input[4] = y<0:7> input[5] = y<8:15>
input[6] = y<16:23> input[7] = y<24:31>
The selector indices are stored in 4-bit nibbles (with the upper 16 bits of the
selector not being used):
selector[0] = s<0:3> selector[1] = s<4:7>
selector[2] = s<8:11> selector[3] = s<12:15>
The returned value r is computed to be:
result[n] := input[selector[n]]
where result[n] is the nth byte of r.
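As an illustrative sketch (the selector value is chosen for this example), the byte
order of a 32-bit word can be reversed with __byte_perm():
__device__ unsigned int byte_swap(unsigned int x)
{
    // With y = 0, input[0..3] are simply the four bytes of x, so the
    // selector 0x0123 picks them in reverse order (byte swap).
    return __byte_perm(x, 0, 0x0123);
}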
C.2.4 Type Casting Functions
There are two categories of type casting functions: the type conversion functions
(Table C-5) and the type reinterpretation functions (Table C-6).
A type reinterpretation function does not change the binary representation of its
input value. For example, __int_as_float(0xC0000000) is equal to -2.0f,
__float_as_int(1.0f) is equal to 0x3f800000.
A type conversion function may change the binary representation of its input value.
For example, __int2float_rn(0xC0000000) is equal to -1073741824.0f,
__float2int_rn(1.0f) is equal to 1.
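The following minimal device-code sketch restates these examples side by side:
__device__ void casting_examples()
{
    float a = __int_as_float(0xC0000000);  // a == -2.0f (same bit pattern)
    int   b = __float_as_int(1.0f);        // b == 0x3f800000 (same bit pattern)
    float c = __int2float_rn(0xC0000000);  // c == -1073741824.0f (same value)
    int   d = __float2int_rn(1.0f);        // d == 1 (same value)
}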
Table C-5. Type Conversion Functions
__float2int_[rn,rz,ru,rd](x)
__float2uint_[rn,rz,ru,rd](x)
__int2float_[rn,rz,ru,rd](x)
__uint2float_[rn,rz,ru,rd](x)
__float2ll_[rn,rz,ru,rd](x)
__float2ull_[rn,rz,ru,rd](x)
__ll2float_[rn,rz,ru,rd](x)
__ull2float_[rn,rz,ru,rd](x)
__float2half_rn(x)
__half2float(x)
__double2float_[rn,rz,ru,rd](x)
__double2int_[rn,rz,ru,rd](x)
__double2uint_[rn,rz,ru,rd](x)
__double2ll_[rn,rz,ru,rd](x)
__double2ull_[rn,rz,ru,rd](x)
__int2double_rn(x)
__uint2double_rn(x)
__ll2double_[rn,rz,ru,rd](x)
__ull2double_[rn,rz,ru,rd](x)
Table C-6. Type Reinterpretation Functions
__int_as_float(x)
__float_as_int(x)
__double_as_longlong(x)
__longlong_as_double(x)
__double2hiint(x)
__double2loint(x)
__hiloint2double(hi, lo)








Appendix D.
C++ Language Constructs
CUDA supports the following C++ language constructs for device code:
 Polymorphism
 Default Parameters
 Operator Overloading
 Namespaces
 Function Templates
 Classes for devices of compute capability 2.x
These C++ constructs are implemented as specified in “The C++ Programming
Language” reference. It is valid to use any of these constructs in .cu CUDA files for
host, device, and kernel (__global__) functions. Any restrictions detailed in previous
parts of this programming guide, like the lack of support for recursion, still apply.
The following subsections provide examples of the various constructs.
D.1 Polymorphism
Generally, polymorphism is the ability to define that functions or operators behave
differently in different contexts. This is also referred to as function (and operator,
see below) overloading.
In practical terms, this means that it is permissible to define two different functions
within the same scope (namespace) as long as they have a distinguishable function
signature. That means that the two functions either consume a different number of
parameters or parameters of different types. When one of the multiple functions
gets invoked, the compiler resolves to the function's implementation that matches
the function signature.
Because of implicit typecasting, a compiler may encounter multiple potential
matches for a function invocation and in that case the matching rules as described in
the C++ Language Standard apply. In practice this means that the compiler will pick
the closest match in case of multiple potential matches.
Example: The following is valid CUDA code:
__device__ void f(float x)
{
// do something with x
}

__device__ void f(int i)
{
// do something with i
}

__device__ void f(double x, double y)
{
// do something with x and y
}
D.2 Default Parameters
With support for polymorphism as described in the previous subsection and the
function signature matching rules in place it becomes possible to provide support
for default values for function parameters.
Example:
__device__ void f(float x = 0.0f)
{
// do something with x
}
Kernel or other device functions can now invoke this version of f in one of two
ways:
f();
// or
float x = /* some value */;
f(x);

Default parameters can only be given for the last n parameters of a function.
D.3 Operator Overloading
Operator overloading allows programmers to define operators for new data-types.
Examples of overloadable operators in C++ are: +, -, *, /, +=, &, [], etc.
Example: The following is valid CUDA code, implementing the + operation
between two uchar4 vectors:
__device__ uchar4 operator+ (const uchar4 & a, const uchar4 & b)
{
uchar4 r;
r.x = a.x + b.x;
...
return r;
}
This new operator can now be used like this:
uchar4 a, b, c;
a = b = /* some initial value */;
c = a + b;
D.4 Namespaces
Namespaces in C++ allow for the creation of a hierarchy of scopes of visibility. All
the symbols inside a namespace can be used within this namespace without
additional syntax.
Namespaces can be used to solve the problem of name-clashes (two
different symbols using identical names), which commonly occurs when using
multiple function libraries from different sources.
Example: The following code defines two functions “f()” in two separate
namespaces (“nvidia” and “other”):
namespace nvidia {
__device__ void f(float x)
{ /* do something with x */ ;}
}

namespace other {
__device__ void f(float x)
{ /* do something with x */ ;}
}

The functions can now be used anywhere via fully qualified names:
nvidia::f(0.5f);
All the symbols in a namespace can be imported into another namespace (scope)
like this:
using namespace nvidia;
f(0.5f);
D.5 Function Templates
Function templates are a form of meta-programming that allows writing a generic
function in a data-type independent fashion. CUDA supports function templates to
the full extent of the C++ standard, including the following concepts:
 Implicit template parameter deduction.
 Explicit instantiation.
 Template specialization.
Example:
template <typename T>
__device__ bool f(T x)
{ return /* some clever code that turns x into a bool here */ }
This function will convert x of any data-type to a bool as long as the code in the
function's body can be compiled for the actual type (T) of the variable x.
f() can be invoked in two ways:
int x = 1;
bool result = f(x);
This first type of invocation relies on the compiler's ability to implicitly deduce the
correct function type for T. In this case the compiler would deduce T to be int and
instantiate f<int>(x).
The second type of invoking the template function is via explicit instantiation like
this:
bool result = f<double>(0.5);

Function templates may be specialized:
template <typename T>
__device__ bool f(T x)
{ return false; }

template <>
__device__ bool
f<int>(int x)
{ return true; }

In this case the implementation for T representing the int type is specialized to
return true, while all other types will be caught by the more general template and
return false.
The complete set of matching rules (for implicitly deducing template parameters)
and matching polymorphous functions apply as specified in the C++ standard.
D.6 Classes
Code compiled for devices with compute capability 2.x and higher may make use of
C++ classes, as long as none of the member functions are virtual (this restriction
will be removed in some future release).
There are two common use cases for classes without virtual member functions:
 Small-data aggregations. E.g. data types like pixels (r, g, b, a), 2D and 3D points,
vectors, etc.
 Functor classes. The use of functors is necessitated by the fact that device-
function pointers are not supported and thus it is not possible to pass functions
as template parameters. A workaround for this restriction is the use of functor
classes (see code sample below).
D.6.1 Example 1: Pixel Data Type
The following is an example of a data type for RGBA pixels with 8 bit per channel
depth:
class PixelRGBA
{
public:
__device__
PixelRGBA(): r_(0), g_(0), b_(0), a_(0)
{ ; }

__device__
PixelRGBA(unsigned char r, unsigned char g, unsigned char b,
unsigned char a = 255): r_(r), g_(g), b_(b), a_(a)
{ ; }

// other methods and operators left out for sake of brevity

private:
unsigned char r_, g_, b_, a_;

friend PixelRGBA operator+(const PixelRGBA &,
const PixelRGBA &);
};

__device__
PixelRGBA operator+(const PixelRGBA & p1, const PixelRGBA & p2)
{
return PixelRGBA(p1.r_ + p2.r_,
p1.g_ + p2.g_,
p1.b_ + p2.b_,
p1.a_ + p2.a_);
}

Other device code can now make use of this new data type as one would expect:
PixelRGBA p1, p2;

// [...] initialization of p1 and p2 here

PixelRGBA p3 = p1 + p2;
D.6.2 Example 2: Functor Class
The following example shows how functors may be used as function template
parameters to implement a set of vector arithmetic operations.
Here are two functors for float addition and subtraction:
class Add
{
public:
__device__
float
operator() (float a, float b)
const
{
return a + b;
}
};

class Sub
{
public:
__device__
float
operator() (float a, float b)
const
{
return a - b;
}
};
The following templatized kernel makes use of the functors like the ones above in
order to implement operations on vectors of floats:
// Device code
template<class O>
__global__
void
VectorOperation(const float * A, const float * B,
float * C, unsigned int N, O op)
{
unsigned int iElement = blockDim.x * blockIdx.x + threadIdx.x;
if (iElement < N)
{
C[iElement] = op(A[iElement], B[iElement]);
}
}
The VectorOperation kernel may now be launched like this in order to get a
vector addition:
// Host code
VectorOperation<<<blocks, threads>>>(v1, v2, v3, N, Add());








Appendix E.
NVCC Specifics
E.1 __noinline__ and __forceinline__
When compiling code for devices of compute capability 1.x, a __device__
function is always inlined by default. When compiling code for devices of compute
capability 2.x, a __device__ function is only inlined when deemed appropriate by
the compiler.
The __noinline__ function qualifier can be used as a hint for the compiler not to
inline the function if possible. The function body must still be in the same file where
it is called. For devices of compute capability 1.x, the compiler will not honor the
__noinline__ qualifier for functions with pointer parameters and for functions
with large parameter lists. For devices of compute capability 2.x, the compiler will
always honor the __noinline__ qualifier.
The __forceinline__ function qualifier can be used to force the compiler to
inline the function.
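For example (a minimal sketch; the function names and bodies are placeholders):
// Hint: ask the compiler not to inline this function if possible.
__noinline__ __device__ float expensive_helper(float x)
{
    /* large body */
    return x;
}

// Force the compiler to inline this function at each call site.
__forceinline__ __device__ float cheap_helper(float x)
{
    return 2.0f * x;
}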
E.2 #pragma unroll
By default, the compiler unrolls small loops with a known trip count. The #pragma
unroll directive however can be used to control unrolling of any given loop. It
must be placed immediately before the loop and only applies to that loop. It is
optionally followed by a number that specifies how many times the loop must be
unrolled.
For example, in this code sample:
#pragma unroll 5
for (int i = 0; i < n; ++i)
the loop will be unrolled 5 times. The compiler will also insert code to ensure
correctness (in the example above, to ensure that there will only be n iterations if n
is less than 5). It is up to the programmer to make sure that the
specified unroll number gives the best performance.
#pragma unroll 1 will prevent the compiler from ever unrolling a loop.
If no number is specified after #pragma unroll, the loop is completely unrolled
if its trip count is constant, otherwise it is not unrolled at all.
E.3 __restrict__
nvcc supports restricted pointers via the __restrict__ keyword.
Restricted pointers were introduced in C99 to alleviate the aliasing problem that
exists in C-type languages, and which inhibits all kinds of optimizations, from code
re-ordering to common sub-expression elimination.
Here is an example subject to the aliasing issue, where use of restricted pointer can
help the compiler to reduce the number of instructions:
void foo(const float* a,
const float* b,
float* c)
{
c[0] = a[0] * b[0];
c[1] = a[0] * b[0];
c[2] = a[0] * b[0] * a[1];
c[3] = a[0] * a[1];
c[4] = a[0] * b[0];
c[5] = b[0];
...
}
In C-type languages, the pointers a, b, and c may be aliased, so any write through c
could modify elements of a or b. This means that to guarantee functional
correctness, the compiler cannot load a[0] and b[0] into registers, multiply them,
and store the result to both c[0] and c[1], because the results would differ from
the abstract execution model if, say, a[0] is really the same location as c[0]. So
the compiler cannot take advantage of the common sub-expression. Likewise,
the compiler cannot just reorder the computation of c[4] into the proximity of the
computation of c[0] and c[1] because the preceding write to c[3] could change
the inputs to the computation of c[4].
By making a, b, and c restricted pointers, the programmer asserts to the compiler
that the pointers are in fact not aliased, which in this case means writes through c
would never overwrite elements of a or b. This changes the function prototype as
follows:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c);
Note that all pointer arguments need to be made restricted for the compiler
optimizer to derive any benefit. With the __restrict__ qualifiers added, the
compiler can now reorder and do common sub-expression elimination at will, while
retaining functionality identical with the abstract execution model:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c)
{
float t0 = a[0];
float t1 = b[0];
float t2 = t0 * t1;
float t3 = a[1];
c[0] = t2;
c[1] = t2;
c[4] = t2;
c[2] = t2 * t3;
c[3] = t0 * t3;
c[5] = t1;
...
}
The effects here are a reduced number of memory accesses and reduced number of
computations. This is balanced by an increase in register pressure due to "cached"
loads and common sub-expressions.
Since register pressure is a critical issue in many CUDA codes, use of restricted
pointers can have negative performance impact on CUDA code, due to reduced
occupancy.









Appendix F.
Texture Fetching
This appendix gives the formula used to compute the value returned by the texture
functions of Section B.8 depending on the various attributes of the texture reference
(see Section 3.2.4).
The texture bound to the texture reference is represented as an array T of N texels
for a one-dimensional texture, N × M texels for a two-dimensional texture, or
N × M × L texels for a three-dimensional texture. It is fetched using texture
coordinates x, y, and z.
A texture coordinate must fall within T's valid addressing range before it can be
used to address T. The addressing mode specifies how an out-of-range texture
coordinate x is remapped to the valid range. If x is non-normalized, only the clamp
addressing mode is supported and x is replaced by 0 if x < 0 and N - 1 if N ≤ x. If
x is normalized:
 In clamp addressing mode, x is replaced by 0 if x < 0 and 1 - 1/N if 1 ≤ x,
 In wrap addressing mode, x is replaced by frac(x), where
frac(x) = x - floor(x) and floor(x) is the largest integer not greater than x.
In the remainder of this appendix, x, y, and z are the non-normalized texture
coordinates remapped to T's valid addressing range. x, y, and z are derived from
the normalized texture coordinates x̂, ŷ, and ẑ as such: x = N*x̂, y = M*ŷ, and
z = L*ẑ.
F.1 Nearest-Point Sampling
In this filtering mode, the value returned by the texture fetch is
 tex(x) = T[i] for a one-dimensional texture,
 tex(x,y) = T[i,j] for a two-dimensional texture,
 tex(x,y,z) = T[i,j,k] for a three-dimensional texture,
where i = floor(x), j = floor(y), and k = floor(z).
Figure F-1 illustrates nearest-point sampling for a one-dimensional texture with
N = 4.
For integer textures, the value returned by the texture fetch can be optionally
remapped to [0.0, 1.0] (see Section 3.2.4.1.1).
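The one-dimensional formula can be expressed in device code as follows (an
illustrative sketch with hypothetical names; T is a size-N texel array and x is
assumed to be already remapped to the valid addressing range):
__device__ float tex1D_nearest(const float* T, float x)
{
    int i = (int)floorf(x);  // i = floor(x)
    return T[i];             // tex(x) = T[i]
}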


Figure F-1. Nearest-Point Sampling of a One-Dimensional
Texture of Four Texels
F.2 Linear Filtering
In this filtering mode, which is only available for floating-point textures, the value
returned by the texture fetch is
 tex(x) = (1-α)*T[i] + α*T[i+1] for a one-dimensional texture,
 ] 1 , 1 [ ] 1 , [ ) 1 ( ] , 1 [ ) 1 ( ] , [ ) 1 )( 1 ( ) , ( + + + + ÷ + + ÷ + ÷ ÷ = j i T j i T j i T j i T y x tex o| | o | o | o
for a two-dimensional texture,
 = ) , , ( z y x tex

] 1 , 1 , 1 [ ] 1 , 1 , [ ) 1 (
] 1 , , 1 [ ) 1 ( ] 1 , , [ ) 1 )( 1 (
] , 1 , 1 [ ) 1 ( ] , 1 , [ ) 1 ( ) 1 (
] , , 1 [ ) 1 )( 1 ( ] , , [ ) 1 )( 1 )( 1 (
+ + + + + + ÷
+ + + ÷ + + ÷ ÷
+ + + ÷ + + ÷ ÷
+ + ÷ ÷ + ÷ ÷ ÷
k j i T k j i T
k j i T k j i T
k j i T k j i T
k j i T k j i T
o|¸ |¸ o
¸ | o ¸ | o
¸ o| ¸ | o
¸ | o ¸ | o

for a three-dimensional texture,
where:
 ) (
B
x floor i = , ) (
B
x frac = o , 5 . 0 ÷ = x x
B
,
 ) (
B
y floor j = , ) (
B
y frac = | , 5 . 0 ÷ = y y
B
,
 ) (
B
z floor k = , ) (
B
z frac = ¸ , 5 . 0 ÷ = z z
B
.
o , | , and ¸ are stored in 9-bit fixed point format with 8 bits of fractional value
(so 1.0 is exactly represented).
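The one-dimensional case can be expressed in device code as follows (an
illustrative sketch with hypothetical names; it ignores the 9-bit fixed-point
quantization of the weight and assumes x is already remapped to the valid
addressing range):
__device__ float tex1D_linear(const float* T, float x)
{
    float xB    = x - 0.5f;
    int   i     = (int)floorf(xB);   // i = floor(x_B)
    float alpha = xB - floorf(xB);   // alpha = frac(x_B)
    return (1.0f - alpha) * T[i] + alpha * T[i + 1];
}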
Figure F-2 illustrates linear filtering of a one-dimensional texture with
N = 4.

Figure F-2. Linear Filtering of a One-Dimensional Texture of
Four Texels in Clamp Addressing Mode
F.3 Table Lookup
A table lookup TL(x) where x spans the interval [0, R] can be implemented as
TL(x) = tex((N-1)/R * x + 0.5)
in order to ensure that TL(0) = T[0] and TL(R) = T[N-1].
Figure F-3 illustrates the use of texture filtering to implement a table lookup with
R = 4 or R = 1 from a one-dimensional texture with N = 4.
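As a sketch (assuming a one-dimensional texture reference texRef set up with
linear filtering and non-normalized coordinates; the names are hypothetical):
texture<float, 1, cudaReadModeElementType> texRef;

__device__ float TL(float x, float R, float N)
{
    // TL(x) = tex((N - 1) / R * x + 0.5)
    return tex1D(texRef, (N - 1.0f) / R * x + 0.5f);
}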


Figure F-3. One-Dimensional Table Lookup Using Linear
Filtering








Appendix G.
Compute Capabilities
The general specifications and features of a compute device depend on its compute
capability (see Section 2.5).
Section G.1 gives the features and technical specifications associated to each
compute capability.
Section G.2 reviews the compliance with the IEEE floating-point standard.
Sections G.3 and G.4 give more details on the architecture of devices of compute
capability 1.x and 2.x, respectively.


G.1 Features and Technical Specifications
Feature Support (unlisted features are supported for all compute capabilities), by
compute capability:
 Integer atomic functions operating on 32-bit words in global memory
(Section B.11): No for 1.0; Yes for 1.1 and higher.
 Integer atomic functions operating on 64-bit words in global memory
(Section B.11), integer atomic functions operating on 32-bit words in shared
memory (Section B.11), and warp vote functions (Section B.12): No for 1.0
and 1.1; Yes for 1.2 and higher.
 Double-precision floating-point numbers: No for 1.0 through 1.2; Yes for 1.3
and higher.
 Floating-point atomic addition operating on 32-bit words in global and shared
memory (Section B.11), __ballot() (Section B.12),
__threadfence_system() (Section B.5), __syncthreads_count(),
__syncthreads_and(), __syncthreads_or() (Section B.6), and surface
functions (Section B.9): No for 1.x; Yes for 2.x.


Technical Specifications, by compute capability (1.0 / 1.1 / 1.2 / 1.3 / 2.x):
 Maximum x- or y-dimension of a grid of thread blocks: 65535.
 Maximum number of threads per block: 512 for 1.x; 1024 for 2.x.
 Maximum x- or y-dimension of a block: 512 for 1.x; 1024 for 2.x.
 Maximum z-dimension of a block: 64.
 Warp size: 32.
 Maximum number of resident blocks per multiprocessor: 8.
 Maximum number of resident warps per multiprocessor: 24 for 1.0 and 1.1;
32 for 1.2 and 1.3; 48 for 2.x.
 Maximum number of resident threads per multiprocessor: 768 for 1.0 and 1.1;
1024 for 1.2 and 1.3; 1536 for 2.x.
 Number of 32-bit registers per multiprocessor: 8 K for 1.0 and 1.1; 16 K for
1.2 and 1.3; 32 K for 2.x.
 Maximum amount of shared memory per multiprocessor: 16 KB for 1.x;
48 KB for 2.x.
 Number of shared memory banks: 16 for 1.x; 32 for 2.x.
 Amount of local memory per thread: 16 KB for 1.x; 512 KB for 2.x.
 Constant memory size: 64 KB.
 Cache working set per multiprocessor for constant memory: 8 KB.
 Cache working set per multiprocessor for texture memory: device dependent,
between 6 KB and 8 KB.
 Maximum width for a 1D texture reference bound to a CUDA array: 8192 for
1.x; 32768 for 2.x.
 Maximum width for a 1D texture reference bound to linear memory: 2^27.
 Maximum width and height for a 2D texture reference bound to linear
memory or to a CUDA array: 65536 x 32768 for 1.x; 65536 x 65535 for 2.x.
 Maximum width, height, and depth for a 3D texture reference bound to linear
memory or a CUDA array: 2048 x 2048 x 2048.
 Maximum number of textures that can be bound to a kernel: 128.
 Maximum width for a 1D surface reference bound to a CUDA array: N/A for
1.x; 8192 for 2.x.
 Maximum width and height for a 2D surface reference bound to a CUDA
array: N/A for 1.x; 8192 x 8192 for 2.x.
 Maximum number of surfaces that can be bound to a kernel: N/A for 1.x; 8
for 2.x.
 Maximum number of instructions per kernel: 2 million.

G.2 Floating-Point Standard
All compute devices follow the IEEE 754-2008 standard for binary floating-point
arithmetic with the following deviations:
 There is no dynamically configurable rounding mode; however, most of the
operations support multiple IEEE rounding modes, exposed via device
intrinsics;
 There is no mechanism for detecting that a floating-point exception has
occurred and all operations behave as if the IEEE-754 exceptions are always
masked, and deliver the masked response as defined by IEEE-754 if there is an
exceptional event; for the same reason, while SNaN encodings are supported,
they are not signaling and are handled as quiet;
 The result of a single-precision floating-point operation involving one or more
input NaNs is the quiet NaN of bit pattern 0x7fffffff;
 Double-precision floating-point absolute value and negation are not compliant
with IEEE-754 with respect to NaNs; these are passed through unchanged;
 For single-precision floating-point numbers on devices of compute
capability 1.x:
 Denormalized numbers are not supported; floating-point arithmetic and
comparison instructions convert denormalized operands to zero prior to
the floating-point operation;
 Underflowed results are flushed to zero;
 Some instructions are not IEEE-compliant:
 Addition and multiplication are often combined into a single multiply-
add instruction (FMAD), which truncates (i.e. without rounding) the
intermediate mantissa of the multiplication;
 Division is implemented via the reciprocal in a non-standard-compliant
way;
 Square root is implemented via the reciprocal square root in a non-
standard-compliant way;
 For addition and multiplication, only round-to-nearest-even and
round-towards-zero are supported via static rounding modes; directed
rounding towards +/- infinity is not supported;
To mitigate the impact of these restrictions, IEEE-compliant software (and
therefore slower) implementations are provided through the following
intrinsics (c.f. Section C.2.1):
 __fmaf_r{n,z,u,d}(float, float, float): single-precision
fused multiply-add with IEEE rounding modes,
 __frcp_r[n,z,u,d](float): single-precision reciprocal with
IEEE rounding modes,
 __fdiv_r[n,z,u,d](float, float): single-precision division
with IEEE rounding modes,
 __fsqrt_r[n,z,u,d](float): single-precision square root with
IEEE rounding modes,
 __fadd_r[u,d](float, float): single-precision addition with
IEEE directed rounding,
 __fmul_r[u,d](float, float): single-precision multiplication
with IEEE directed rounding;
 For double-precision floating-point numbers on devices of compute
capability 1.x:
 Round-to-nearest-even is the only supported IEEE rounding mode for
reciprocal, division, and square root.
When compiling for devices without native double-precision floating-point support,
i.e. devices of compute capability 1.2 and lower, each double variable is converted
to single-precision floating-point format (but retains its size of 64 bits) and double-
precision floating-point arithmetic gets demoted to single-precision floating-point
arithmetic.
For devices of compute capability 2.x, code must be compiled with -ftz=false,
-prec-div=true, and -prec-sqrt=true to ensure IEEE compliance (this is
the default setting; see the nvcc user manual for description of these compilation
flags); code compiled with -ftz=true, -prec-div=false, and
-prec-sqrt=false comes closest to the code generated for devices of compute
capability 1.x.
Addition and multiplication are often combined into a single multiply-add
instruction:
 FMAD for single precision on devices of compute capability 1.x,
 FFMA for single precision on devices of compute capability 2.x.
As mentioned above, FMAD truncates the mantissa prior to using it in the addition.
FFMA, on the other hand, is an IEEE-754(2008) compliant fused multiply-add
instruction, so the full-width product is being used in the addition and a single
rounding occurs during generation of the final result. While FFMA in general has
superior numerical properties compared to FMAD, the switch from FMAD to
FFMA can cause slight changes in numeric results and can in rare circumstances
lead to slightly larger error in final results.
In accordance with the IEEE-754R standard, if one of the input parameters to
fminf(), fmin(), fmaxf(), or fmax() is NaN, but not the other, the result is
the non-NaN parameter.
The conversion of a floating-point value to an integer value in the case where the
floating-point value falls outside the range of the integer format is left undefined by
IEEE-754. For compute devices, the behavior is to clamp to the end of the
supported range. This is unlike the x86 architecture behavior.
G.3 Compute Capability 1.x
G.3.1 Architecture
For devices of compute capability 1.x, a multiprocessor consists of:
 8 CUDA cores for integer and single-precision floating-point arithmetic
operations,
 1 double-precision floating-point unit for double-precision floating-point
arithmetic operations,
 2 special function units for single-precision floating-point transcendental
functions (these units can also handle single-precision floating-point
multiplications),
 1 warp scheduler.
To execute an instruction for all threads of a warp, the warp scheduler must
therefore issue the instruction over:
 4 clock cycles for an integer or single-precision floating-point arithmetic
instruction,
 32 clock cycles for a double-precision floating-point arithmetic instruction,
 16 clock cycles for a single-precision floating-point transcendental instruction.
A multiprocessor also has a read-only constant cache that is shared by all functional
units and speeds up reads from the constant memory space, which resides in device
memory.
Multiprocessors are grouped into Texture Processor Clusters (TPCs). The number of
multiprocessors per TPC is:
 2 for devices of compute capabilities 1.0 and 1.1,
 3 for devices of compute capabilities 1.2 and 1.3.
Each TPC has a read-only texture cache that is shared by all multiprocessors and
speeds up reads from the texture memory space, which resides in device memory.
Each multiprocessor accesses the texture cache via a texture unit that implements
the various addressing modes and data filtering mentioned in Section 3.2.4.
The local and global memory spaces reside in device memory and are not cached.
G.3.2 Global Memory
A global memory request for a warp is split into two memory requests, one for each
half-warp, that are issued independently. Sections G.3.2.1 and G.3.2.2 describe how
the memory accesses of threads within a half-warp are coalesced into one or more
memory transactions depending on the compute capability of the device. Figure G-1
shows some examples of global memory accesses and corresponding memory
transactions based on compute capability.
The resulting memory transactions are serviced at the throughput of device
memory.
G.3.2.1 Devices of Compute Capability 1.0 and 1.1
To coalesce, the memory request for a half-warp must satisfy the following
conditions:
 The size of the words accessed by the threads must be 4, 8, or 16 bytes;
 If this size is:
 4, all 16 words must lie in the same 64-byte segment,
 8, all 16 words must lie in the same 128-byte segment,
 16, the first 8 words must lie in the same 128-byte segment and the last 8
words in the following 128-byte segment;
 Threads must access the words in sequence: The kth thread in the half-warp
must access the kth word.
If the half-warp meets these requirements, a 64-byte memory transaction, a 128-byte
memory transaction, or two 128-byte memory transactions are issued if the size of
the words accessed by the threads is 4, 8, or 16, respectively. Coalescing is achieved
even if the warp is divergent, i.e. there are some inactive threads that do not actually
access memory.
If the half-warp does not meet these requirements, 16 separate 32-byte memory
transactions are issued.
G.3.2.2 Devices of Compute Capability 1.2 and 1.3
Threads can access any words in any order, including the same words, and a single
memory transaction for each segment addressed by the half-warp is issued. This is
in contrast with devices of compute capabilities 1.0 and 1.1 where threads need to
access words in sequence and coalescing only happens if the half-warp addresses a
single segment.
More precisely, the following protocol is used to determine the memory transactions
necessary to service all threads in a half-warp:
 Find the memory segment that contains the address requested by the lowest
numbered active thread. The segment size depends on the size of the words
accessed by the threads:
 32 bytes for 1-byte words,
 64 bytes for 2-byte words,
 128 bytes for 4-, 8- and 16-byte words.
 Find all other active threads whose requested address lies in the same segment.
 Reduce the transaction size, if possible:
 If the transaction size is 128 bytes and only the lower or upper half is used,
reduce the transaction size to 64 bytes;
 If the transaction size is 64 bytes (originally or after reduction from 128
bytes) and only the lower or upper half is used, reduce the transaction size
to 32 bytes.
 Carry out the transaction and mark the serviced threads as inactive.
 Repeat until all threads in the half-warp are serviced.
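For instance, with 4-byte words, an aligned and sequential access pattern such as
the following minimal sketch is serviced by a single 64-byte transaction per
half-warp on devices of compute capability 1.2 and 1.3 (assuming in and out
point to suitably aligned allocations, as returned by cudaMalloc()):
__global__ void copy(const float* in, float* out)
{
    // Thread k of each half-warp reads and writes word k of an
    // aligned segment, so the accesses coalesce.
    unsigned int i = blockDim.x * blockIdx.x + threadIdx.x;
    out[i] = in[i];
}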
G.3.3 Shared Memory
Shared memory has 16 banks that are organized such that successive 32-bit words
are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32
bits per two clock cycles.
A shared memory request for a warp is split into two memory requests, one for each
half-warp, that are issued independently. As a consequence, there can be no bank
conflict between a thread belonging to the first half of a warp and a thread
belonging to the second half of the same warp.
If a non-atomic instruction executed by a warp writes to the same location in shared
memory for more than one of the threads of the warp, only one thread per half-
warp performs a write and which thread performs the final write is undefined.
G.3.3.1 32-Bit Strided Access
A common access pattern is for each thread to access a 32-bit word from an array
indexed by the thread ID tid and with some stride s:
__shared__ float shared[32];
float data = shared[BaseIndex + s * tid];
In this case, threads tid and tid+n access the same bank whenever s*n is a
multiple of the number of banks (i.e. 16) or, equivalently, whenever n is a multiple
of 16/d where d is the greatest common divisor of 16 and s. As a consequence,
there will be no bank conflict only if half the warp size (i.e. 16) is less than or equal
to 16/d, that is, only if d is equal to 1, i.e. s is odd.
Figure G-2 shows some examples of strided access for devices of compute
capability 2.x. The same examples apply for devices of compute capability 1.x, but
with 16 banks instead of 32.
G.3.3.2 32-Bit Broadcast Access
Shared memory features a broadcast mechanism whereby a 32-bit word can be read
and broadcast to several threads simultaneously when servicing one memory read
request. This reduces the number of bank conflicts when several threads read from
an address within the same 32-bit word. More precisely, a memory read request
made of several addresses is serviced in several steps over time by servicing one
conflict-free subset of these addresses per step until all addresses have been
serviced; at each step, the subset is built from the remaining addresses that have yet
to be serviced using the following procedure:
 Select one of the words pointed to by the remaining addresses as the broadcast
word;
 Include in the subset:
 All addresses that are within the broadcast word,
 One address for each bank (other than the broadcasting bank) pointed to
by the remaining addresses.
Which word is selected as the broadcast word and which address is picked up for
each bank at each cycle are unspecified.
A common conflict-free case is when all threads of a half-warp read from an address
within the same 32-bit word.
Figure G-3 shows some examples of memory read accesses that involve the
broadcast mechanism. The same examples apply for devices of compute capability
1.x, but with 16 banks instead of 32.
G.3.3.3 8-Bit and 16-Bit Access
8-bit and 16-bit accesses typically generate bank conflicts. For example, there are
bank conflicts if an array of char is accessed the following way:
__shared__ char shared[32];
char data = shared[BaseIndex + tid];
because shared[0], shared[1], shared[2], and shared[3], for example,
belong to the same bank. There are no bank conflicts however, if the same array is
accessed the following way:
char data = shared[BaseIndex + 4 * tid];
G.3.3.4 Larger Than 32-Bit Access
Accesses that are larger than 32-bit per thread are split into 32-bit accesses that
typically generate bank conflicts.
For example, there are 2-way bank conflicts for arrays of doubles accessed as
follows:
__shared__ double shared[32];
double data = shared[BaseIndex + tid];
as the memory request is compiled into two separate 32-bit requests with a stride of
two. One way to avoid bank conflicts in this case is to split the double operands
like in the following sample code:
__shared__ int shared_lo[32];
__shared__ int shared_hi[32];

double dataIn;
shared_lo[BaseIndex + tid] = __double2loint(dataIn);
shared_hi[BaseIndex + tid] = __double2hiint(dataIn);

double dataOut =
__hiloint2double(shared_hi[BaseIndex + tid],
shared_lo[BaseIndex + tid]);
This might not always improve performance, however, and does perform worse on
devices of compute capability 2.x.
The same applies to structure assignments. The following code, for example:
__shared__ struct type shared[32];
struct type data = shared[BaseIndex + tid];
results in:
 Three separate reads without bank conflicts if type is defined as
struct type {
float x, y, z;
};
since each member is accessed with an odd stride of three 32-bit words;
 Two separate reads with bank conflicts if type is defined as
struct type {
float x, y;
};
since each member is accessed with an even stride of two 32-bit words.
G.4 Compute Capability 2.x
G.4.1 Architecture
For devices of compute capability 2.x, a multiprocessor consists of:
 For devices of compute capability 2.0:
 32 CUDA cores for integer and floating-point arithmetic operations,
 4 special function units for single-precision floating-point transcendental
functions,
 For devices of compute capability 2.1:
 48 CUDA cores for integer and floating-point arithmetic operations,
 8 special function units for single-precision floating-point transcendental
functions,
 2 warp schedulers.
At every instruction issue time, each scheduler issues:
 One instruction for devices of compute capability 2.0,
 Two instructions for devices of compute capability 2.1,
for some warp that is ready to execute, if any. The first scheduler is in charge of the
warps with an odd ID and the second scheduler is in charge of the warps with an
even ID. Note that when a scheduler issues a double-precision floating-point
instruction, the other scheduler cannot issue any instruction.
A warp scheduler can issue an instruction to only half of the CUDA cores. To
execute an instruction for all threads of a warp, a warp scheduler must therefore
issue the instruction over two clock cycles for an integer or floating-point arithmetic
instruction.
A multiprocessor also has a read-only uniform cache that is shared by all functional
units and speeds up reads from the constant memory space, which resides in device
memory.
There is an L1 cache for each multiprocessor and an L2 cache shared by all
multiprocessors, both of which are used to cache accesses to local or global
memory, including temporary register spills. The cache behavior (e.g. whether reads
are cached in both L1 and L2 or in L2 only) can be partially configured on a per-
access basis using modifiers to the load or store instruction.
The same on-chip memory is used for both L1 and shared memory: It can be
configured as 48 KB of shared memory and 16 KB of L1 cache or as 16 KB of
shared memory and 48 KB of L1 cache, using
cudaFuncSetCacheConfig()/cuFuncSetCacheConfig():
// Device code
__global__ void MyKernel()
{
...
}

// Host code

// Runtime API
// cudaFuncCachePreferShared: shared memory is 48 KB
// cudaFuncCachePreferL1: shared memory is 16 KB
// cudaFuncCachePreferNone: no preference
cudaFuncSetCacheConfig(MyKernel, cudaFuncCachePreferShared);

// Driver API
// CU_FUNC_CACHE_PREFER_SHARED: shared memory is 48 KB
// CU_FUNC_CACHE_PREFER_L1: shared memory is 16 KB
// CU_FUNC_CACHE_PREFER_NONE: no preference
CUfunction myKernel;
cuFuncSetCacheConfig(myKernel, CU_FUNC_CACHE_PREFER_SHARED);
The default cache configuration is "prefer none," meaning "no preference." If a
kernel is configured to have no preference, then it will default to the preference of
the current thread/context, which is set using
cudaThreadSetCacheConfig()/cuCtxSetCacheConfig() (see the
reference manual for details). If the current thread/context also has no preference
(which is again the default setting), then whichever cache configuration was most
recently used for any kernel will be the one that is used, unless a different cache
configuration is required to launch the kernel (e.g., due to shared memory
requirements). The initial configuration is 48KB of shared memory and 16KB of L1
cache.
Multiprocessors are grouped into Graphics Processor Clusters (GPCs). A GPC includes
four multiprocessors.
Each multiprocessor has a read-only texture cache to speed up reads from the
texture memory space, which resides in device memory. It accesses the texture cache
via a texture unit that implements the various addressing modes and data filtering
mentioned in Section 3.2.4.
G.4.2 Global Memory
Global memory accesses are cached. Using the -dlcm compilation flag, they can be
configured at compile time to be cached in both L1 and L2 (-Xptxas -dlcm=ca)
(this is the default setting) or in L2 only (-Xptxas -dlcm=cg).
A cache line is 128 bytes and maps to a 128-byte aligned segment in device memory.
Memory accesses that are cached in both L1 and L2 are serviced with 128-byte
memory transactions whereas memory accesses that are cached in L2 only are
serviced with 32-byte memory transactions. Caching in L2 only can therefore reduce
over-fetch, for example, in the case of scattered memory accesses.
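For example, a kernel can be compiled to cache global memory accesses in L2 only
with (the source file name here is hypothetical):
nvcc -arch=sm_20 -Xptxas -dlcm=cg kernel.cu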
If the size of the words accessed by each thread is more than 4 bytes, a memory
request by a warp is first split into separate 128-byte memory requests that are
issued independently:
 Two memory requests, one for each half-warp, if the size is 8 bytes,
 Four memory requests, one for each quarter-warp, if the size is 16 bytes.
Each memory request is then broken down into cache line requests that are issued
independently. A cache line request is serviced at the throughput of L1 or L2 cache
in case of a cache hit, or at the throughput of device memory, otherwise.
Note that threads can access any words in any order, including the same words.
If a non-atomic instruction executed by a warp writes to the same location in global
memory for more than one of the threads of the warp, only one thread performs a
write and which thread does it is undefined.


Figure G-1. Examples of Global Memory Accesses by a Warp,
4-Byte Word per Thread, and Associated Memory
Transactions Based on Compute Capability
[Figure: three panels; in each, threads 0 through 31 access 4-byte words at
addresses 96 through 288, and the resulting memory transactions are listed per
compute capability (uncached for 1.x, cached for 2.0).]
Aligned and non-sequential: 1.0 and 1.1 issue 8 x 32B at 128, 8 x 32B at 160,
8 x 32B at 192, and 8 x 32B at 224; 1.2 and 1.3 issue 1 x 64B at 128 and
1 x 64B at 192; 2.0 issues 1 x 128B at 128.
Aligned and sequential: 1.0 and 1.1 issue 1 x 64B at 128 and 1 x 64B at 192;
1.2 and 1.3 issue 1 x 64B at 128 and 1 x 64B at 192; 2.0 issues 1 x 128B at 128.
Misaligned and sequential: 1.0 and 1.1 issue 7 x 32B at 128, 8 x 32B at 160,
8 x 32B at 192, 8 x 32B at 224, and 1 x 32B at 256; 1.2 and 1.3 issue
1 x 128B at 128, 1 x 64B at 192, and 1 x 32B at 256; 2.0 issues 1 x 128B at 128
and 1 x 128B at 256.
G.4.3 Shared Memory
Shared memory has 32 banks that are organized such that successive 32-bit words
are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32
bits per two clock cycles. Therefore, unlike for devices of lower compute capability,
there may be bank conflicts between a thread belonging to the first half of a warp
and a thread belonging to the second half of the same warp.
A bank conflict only occurs if two or more threads access any bytes within different
32-bit words belonging to the same bank. If two or more threads access any bytes
within the same 32-bit word, there is no bank conflict between these threads: For
read accesses, the word is broadcast to the requesting threads (unlike for devices of
compute capability 1.x, multiple words can be broadcast in a single transaction); for
write accesses, each byte is written by only one of the threads (which thread
performs the write is undefined).
This means, in particular, that unlike for devices of compute capability 1.x, there are
no bank conflicts if an array of char is accessed as follows, for example:
__shared__ char shared[32];
char data = shared[BaseIndex + tid];
G.4.3.1 32-Bit Strided Access
A common access pattern is for each thread to access a 32-bit word from an array
indexed by the thread ID tid and with some stride s:
__shared__ float shared[32];
float data = shared[BaseIndex + s * tid];
In this case, threads tid and tid+n access the same bank whenever s*n is a
multiple of the number of banks (i.e. 32) or, equivalently, whenever n is a multiple
of 32/d where d is the greatest common divisor of 32 and s. As a consequence,
there will be no bank conflict only if the warp size (i.e. 32) is less than or equal to
32/d, that is, only if d is equal to 1, i.e. s is odd.
Figure G-2 shows some examples of strided access.
G.4.3.2 Larger Than 32-Bit Access
64-bit and 128-bit accesses are specifically handled to minimize bank conflicts as
described below.
Other accesses larger than 32-bit are split into 32-bit, 64-bit, or 128-bit accesses.
The following code, for example:
struct type {
float x, y, z;
};

__shared__ struct type shared[32];
struct type data = shared[BaseIndex + tid];
results in three separate 32-bit reads without bank conflicts since each member is
accessed with a stride of three 32-bit words.
64-Bit Accesses
For 64-bit accesses, a bank conflict only occurs if two or more threads in either of
the half-warps access different addresses belonging to the same bank.
Unlike for devices of compute capability 1.x, there are no bank conflicts for arrays
of doubles accessed as follows, for example:
__shared__ double shared[32];
double data = shared[BaseIndex + tid];
128-Bit Accesses
The majority of 128-bit accesses will cause 2-way bank conflicts, even if no two
threads in a quarter-warp access different addresses belonging to the same bank.
Therefore, to determine the ways of bank conflicts, one must add 1 to the
maximum number of threads in a quarter-warp that access different addresses
belonging to the same bank.
G.4.4 Constant Memory
In addition to the constant memory space supported by devices of all compute
capabilities (where __constant__ variables reside), devices of compute
capability 2.x support the LDU (LoaD Uniform) instruction that the compiler uses
to load any variable that is:
 pointing to global memory,
 read-only in the kernel (programmer can enforce this using the const
keyword),
 not dependent on thread ID.



Left: Linear addressing with a stride of one 32-bit word (no bank conflict).
Middle: Linear addressing with a stride of two 32-bit words (2-way bank conflicts).
Right: Linear addressing with a stride of three 32-bit words (no bank conflict).
Figure G-2 Examples of Strided Shared Memory Accesses for
Devices of Compute Capability 2.x

Left: Conflict-free access via random permutation.
Middle: Conflict-free access since threads 3, 4, 6, 7, and 9 access the same word within bank 5.
Right: Conflict-free broadcast access (all threads access the same word).
Figure G-3 Examples of Irregular and Colliding Shared
Memory Accesses for Devices of Compute
Capability 2.x

NVIDIA Corporation
2701 San Tomas Expressway
Santa Clara, CA 95050
www.nvidia.com
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND
OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA
MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE
MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT,
MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no
responsibility for the consequences of use of such information or for any infringement of patents or other
rights of third parties that may result from its use. No license is granted by implication or otherwise under any
patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to
change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA
Corporation products are not authorized for use as critical components in life support devices or systems
without express written approval of NVIDIA Corporation.
Trademarks
NVIDIA, the NVIDIA logo, GeForce, Tesla, and Quadro are trademarks or registered trademarks of NVIDIA
Corporation. Other company and product names may be trademarks of the respective companies with which
they are associated.
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
Copyright
© 2006-2010 NVIDIA Corporation. All rights reserved.
This work incorporates portions of an earlier work: Scalable Parallel Programming with CUDA, in ACM
Queue, VOL 6, No. 2 (March/April 2008), © ACM, 2008. http://mags.acm.org/queue/20080304/?u1=texterity


Chapter 1. Introduction

1.1 From Graphics Processing to General-Purpose Parallel Computing

Driven by the insatiable market demand for realtime, high-definition 3D graphics, the programmable Graphic Processor Unit or GPU has evolved into a highly parallel, multithreaded, manycore processor with tremendous computational horsepower and very high memory bandwidth, as illustrated by Figure 1-1.

Figure 1-1. Floating-Point Operations per Second and Memory Bandwidth for the CPU and GPU

The reason behind the discrepancy in floating-point capability between the CPU and the GPU is that the GPU is specialized for compute-intensive, highly parallel computation – exactly what graphics rendering is about – and therefore designed such that more transistors are devoted to data processing rather than data caching and flow control, as schematically illustrated by Figure 1-2.

Figure 1-2. The GPU Devotes More Transistors to Data Processing

More specifically, the GPU is especially well-suited to address problems that can be expressed as data-parallel computations – the same program is executed on many data elements in parallel – with high arithmetic intensity – the ratio of arithmetic operations to memory operations. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.

Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets can use a data-parallel programming model to speed up the computations. In 3D rendering, large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.

1.2 CUDA™: a General-Purpose Parallel Computing Architecture

In November 2006, NVIDIA introduced CUDA™, a general purpose parallel computing architecture – with a new parallel programming model and instruction set architecture – that leverages the parallel compute engine in NVIDIA GPUs to solve many complex computational problems in a more efficient way than on a CPU.

CUDA comes with a software environment that allows developers to use C as a high-level programming language. As illustrated by Figure 1-3, other languages or application programming interfaces are supported, such as CUDA FORTRAN, OpenCL, and DirectCompute.

Figure 1-3. CUDA is Designed to Support Various Languages or Application Programming Interfaces

1.3 A Scalable Programming Model

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.

The CUDA parallel programming model is designed to overcome this challenge while maintaining a low learning curve for programmers familiar with standard programming languages such as C.

At its core are three key abstractions – a hierarchy of thread groups, shared memories, and barrier synchronization – that are simply exposed to the programmer as a minimal set of language extensions.

These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. They guide the programmer to partition the problem into coarse sub-problems that can be solved independently in parallel by blocks of threads, and each sub-problem into finer pieces that can be solved cooperatively in parallel by all threads within the block.

This decomposition preserves language expressivity by allowing threads to cooperate when solving each sub-problem, and at the same time enables automatic scalability.

Indeed, each block of threads can be scheduled on any of the available processor cores, in any order, concurrently or sequentially, so that a compiled CUDA program can execute on any number of processor cores as illustrated by Figure 1-4, and only the runtime system needs to know the physical processor count.

This scalable programming model allows the CUDA architecture to span a wide market range by simply scaling the number of processors and memory partitions: from the high-performance enthusiast GeForce GPUs and professional Quadro and Tesla computing products to a variety of inexpensive, mainstream GeForce GPUs (see Appendix A for a list of all CUDA-enabled GPUs).

Figure 1-4. Automatic Scalability

A multithreaded program is partitioned into blocks of threads that execute independently from each other, so that a GPU with more cores will automatically execute the program in less time than a GPU with fewer cores.

1.4 Document's Structure

This document is organized into the following chapters:
 Chapter 1 is a general introduction to CUDA.
 Chapter 2 outlines the CUDA programming model.
 Chapter 3 describes the programming interface.
 Chapter 4 describes the hardware implementation.
 Chapter 5 gives some guidance on how to achieve maximum performance.
 Appendix A lists all CUDA-enabled devices.
 Appendix B is a detailed description of all extensions to the C language.
 Appendix C lists the mathematical functions supported in CUDA.
 Appendix D lists the C++ constructs supported in device code.
 Appendix E lists the specific keywords and directives supported by nvcc.
 Appendix F gives more details on texture fetching.
 Appendix G gives the technical specifications of various devices, as well as more architectural details.

Chapter 2. Programming Model

This chapter introduces the main concepts behind the CUDA programming model by outlining how they are exposed in C. An extensive description of CUDA C is given in Section 3.2. Full code for the vector addition example used in this chapter and the next can be found in the vectorAdd SDK code sample.

2.1 Kernels

CUDA C extends C by allowing the programmer to define C functions, called kernels, that, when called, are executed N times in parallel by N different CUDA threads, as opposed to only once like regular C functions.

A kernel is defined using the __global__ declaration specifier and the number of CUDA threads that execute that kernel for a given kernel call is specified using a new <<<…>>> execution configuration syntax (see Appendix B.16). Each thread that executes the kernel is given a unique thread ID that is accessible within the kernel through the built-in threadIdx variable.

As an illustration, the following sample code adds two vectors A and B of size N and stores the result into vector C:

// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
}

Here, each of the N threads that execute VecAdd() performs one pair-wise addition.

2.2 Thread Hierarchy

For convenience, threadIdx is a 3-component vector, so that threads can be identified using a one-dimensional, two-dimensional, or three-dimensional thread index, forming a one-dimensional, two-dimensional, or three-dimensional thread block. This provides a natural way to invoke computation across the elements in a domain such as a vector, matrix, or volume.

The index of a thread and its thread ID relate to each other in a straightforward way: For a one-dimensional block, they are the same; for a two-dimensional block of size (Dx, Dy), the thread ID of a thread of index (x, y) is (x + y Dx); for a three-dimensional block of size (Dx, Dy, Dz), the thread ID of a thread of index (x, y, z) is (x + y Dx + z Dx Dy).
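As a minimal illustration of this mapping (this snippet is not part of the guide's samples; the kernel and its output array are hypothetical), a thread can compute its own ID from its three-dimensional index using the built-in blockDim variable introduced below:

// Hypothetical kernel: each thread of a three-dimensional block computes
// its thread ID as x + y * Dx + z * Dx * Dy and records it.
__global__ void ThreadID(int* out)
{
    int threadId = threadIdx.x
                 + threadIdx.y * blockDim.x
                 + threadIdx.z * blockDim.x * blockDim.y;
    out[threadId] = threadId;
}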

As an example, the following code adds two matrices A and B of size NxN and stores the result into matrix C:

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
}

There is a limit to the number of threads per block, since all threads of a block are expected to reside on the same processor core and must share the limited memory resources of that core. On current GPUs, a thread block may contain up to 1024 threads.

However, a kernel can be executed by multiple equally-shaped thread blocks, so that the total number of threads is equal to the number of threads per block times the number of blocks.

Blocks are organized into a one-dimensional or two-dimensional grid of thread blocks as illustrated by Figure 2-1. The number of thread blocks in a grid is usually dictated by the size of the data being processed or the number of processors in the system, which it can greatly exceed.

Figure 2-1. Grid of Thread Blocks

The number of threads per block and the number of blocks per grid specified in the <<<…>>> syntax can be of type int or dim3. Two-dimensional blocks or grids can be specified as in the example above.

Each block within the grid can be identified by a one-dimensional or two-dimensional index accessible within the kernel through the built-in blockIdx variable. The dimension of the thread block is accessible within the kernel through the built-in blockDim variable.

Extending the previous MatAdd() example to handle multiple blocks, the code becomes as follows.

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
}

A thread block size of 16x16 (256 threads), although arbitrary in this case, is a common choice. The grid is created with enough blocks to have one thread per matrix element as before. For simplicity, this example assumes that the number of threads per grid in each dimension is evenly divisible by the number of threads per block in that dimension, although that need not be the case.
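When that divisibility assumption does not hold, a common idiom (a sketch, not part of the guide's sample) is to round the grid size up and let the bounds check in MatAdd() discard the out-of-range threads:

// Round up so that partial blocks at the edges are still launched;
// the if (i < N && j < N) test in the kernel ignores the extra threads.
dim3 threadsPerBlock(16, 16);
dim3 numBlocks((N + threadsPerBlock.x - 1) / threadsPerBlock.x,
               (N + threadsPerBlock.y - 1) / threadsPerBlock.y);
MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);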

Thread blocks are required to execute independently: It must be possible to execute them in any order, in parallel or in series. This independence requirement allows thread blocks to be scheduled in any order across any number of cores as illustrated by Figure 1-4, enabling programmers to write code that scales with the number of cores.

Threads within a block can cooperate by sharing data through some shared memory and by synchronizing their execution to coordinate memory accesses. More precisely, one can specify synchronization points in the kernel by calling the __syncthreads() intrinsic function; __syncthreads() acts as a barrier at which all threads in the block must wait before any is allowed to proceed. Section 3.2.2 gives an example of using shared memory.

For efficient cooperation, the shared memory is expected to be a low-latency memory near each processor core (much like an L1 cache) and __syncthreads() is expected to be lightweight.

2.3 Memory Hierarchy

CUDA threads may access data from multiple memory spaces during their execution as illustrated by Figure 2-2. Each thread has private local memory. Each thread block has shared memory visible to all threads of the block and with the same lifetime as the block. All threads have access to the same global memory.

There are also two additional read-only memory spaces accessible by all threads: the constant and texture memory spaces. The global, constant, and texture memory spaces are optimized for different memory usages (see Sections 5.3.2.1, 5.3.2.4, and 5.3.2.5). Texture memory also offers different addressing modes, as well as data filtering, for some specific data formats (see Section 3.2.4).

The global, constant, and texture memory spaces are persistent across kernel launches by the same application.

Figure 2-2. Memory Hierarchy

2.4 Heterogeneous Programming

As illustrated by Figure 2-3, the CUDA programming model assumes that the CUDA threads execute on a physically separate device that operates as a coprocessor to the host running the C program. This is the case, for example, when the kernels execute on a GPU and the rest of the C program executes on a CPU.

The CUDA programming model also assumes that both the host and the device maintain their own separate memory spaces in DRAM, referred to as host memory and device memory, respectively. Therefore, a program manages the global, constant, and texture memory spaces visible to kernels through calls to the CUDA runtime (described in Chapter 3). This includes device memory allocation and deallocation as well as data transfer between host and device memory.

Figure 2-3. Heterogeneous Programming

Serial code executes on the host while parallel code executes on the device.

2.5 Compute Capability

The compute capability of a device is defined by a major revision number and a minor revision number.

Devices with the same major revision number are of the same core architecture. The major revision number of devices based on the Fermi architecture is 2. Prior devices are all of compute capability 1.x (their major revision number is 1).

The minor revision number corresponds to an incremental improvement to the core architecture, possibly including new features.

Appendix A lists all CUDA-enabled devices along with their compute capability. Appendix G gives the technical specifications of each compute capability.
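As a short illustration (this snippet is not from the guide's samples), an application can query a device's compute capability at runtime through the cudaDeviceProp structure of the runtime API:

#include <stdio.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0); // properties of device 0
    printf("Compute capability %d.%d\n", prop.major, prop.minor);
    return 0;
}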

Chapter 3. Programming Interface

Two interfaces are currently supported to write CUDA programs: CUDA C and the CUDA driver API. An application typically uses either one or the other, but it can use both as described in Section 3.4.

CUDA C exposes the CUDA programming model as a minimal set of extensions to the C language. Any source file that contains some of these extensions must be compiled with nvcc as outlined in Section 3.1. These extensions allow programmers to define a kernel as a C function and use some new syntax to specify the grid and block dimension each time the function is called.

The CUDA driver API is a lower-level C API that provides functions to load kernels as modules of CUDA binary or assembly code, to inspect their parameters, and to launch them. Binary and assembly codes are usually obtained by compiling kernels written in C.

CUDA C comes with a runtime API and both the runtime API and the driver API provide functions to allocate and deallocate device memory, transfer data between host memory and device memory, manage systems with multiple devices, etc.

The runtime API is built on top of the CUDA driver API. Initialization, context, and module management are all implicit and resulting code is more concise. In contrast, the CUDA driver API requires more code, is harder to program and debug, but offers a better level of control and is language-independent since it handles binary or assembly code.

Section 3.2 continues the description of CUDA C started in Chapter 2. It also introduces concepts that are common to both CUDA C and the driver API: linear memory, CUDA arrays, shared memory, texture memory, page-locked host memory, device enumeration, asynchronous execution, interoperability with graphics APIs. Section 3.3 assumes knowledge of these concepts and describes how they are exposed by the driver API.

3.1 Compilation with NVCC

Kernels can be written using the CUDA instruction set architecture, called PTX, which is described in the PTX reference manual. It is however usually more effective to use a high-level programming language such as C. In both cases, kernels must be compiled into binary code by nvcc to execute on the device.

nvcc is a compiler driver that simplifies the process of compiling C or PTX code: It provides simple and familiar command line options and executes them by invoking the collection of tools that implement the different compilation stages. This section gives an overview of nvcc workflow and command options. A complete description can be found in the nvcc user manual.

3.1.1 Compilation Workflow

Source files compiled with nvcc can include a mix of host code (i.e. code that executes on the host) and device code (i.e. code that executes on the device). nvcc's basic workflow consists in separating device code from host code and compiling the device code into an assembly form (PTX code) and/or binary form (cubin object). The generated host code is output either as C code that is left to be compiled using another tool or as object code directly by letting nvcc invoke the host compiler during the last compilation stage.

Applications can then:
 Either load and execute the PTX code or cubin object on the device using the CUDA driver API (see Section 3.3) and ignore the generated host code (if any);
 Or link to the generated host code; the generated host code includes the PTX code and/or cubin object as a global initialized data array and a translation of the <<<…>>> syntax introduced in Section 2.1 (and described in more details in Section B.16) into the necessary CUDA C runtime function calls to load and launch each compiled kernel.

Any PTX code loaded by an application at runtime is compiled further to binary code by the device driver. This is called just-in-time compilation. Just-in-time compilation increases application load time, but allows applications to benefit from latest compiler improvements. It is also the only way for applications to run on devices that did not exist at the time the application was compiled, as detailed in Section 3.1.4.

3.1.2 Binary Compatibility

Binary code is architecture-specific. A cubin object is generated using the compiler option –code that specifies the targeted architecture: For example, compiling with –code=sm_13 produces binary code for devices of compute capability 1.3. Binary compatibility is guaranteed from one minor revision to the next one, but not from one minor revision to the previous one or across major revisions. In other words, a cubin object generated for compute capability X.y is only guaranteed to execute on devices of compute capability X.z where z≥y.

3.1.3 PTX Compatibility

Some PTX instructions are only supported on devices of higher compute capabilities. For example, atomic instructions on global memory are only supported on devices of compute capability 1.1 and above; double-precision instructions are only supported on devices of compute capability 1.3 and above.

The –arch compiler option specifies the compute capability that is assumed when compiling C to PTX code. So, code that contains double-precision arithmetic, for example, must be compiled with “-arch=sm_13” (or higher compute capability), otherwise double-precision arithmetic will get demoted to single-precision arithmetic.

PTX code produced for some specific compute capability can always be compiled to binary code of greater or equal compute capability.

3.1.4 Application Compatibility

To execute code on devices of specific compute capability, an application must load binary or PTX code that is compatible with this compute capability as described in Sections 3.1.2 and 3.1.3. In particular, to be able to execute code on future architectures with higher compute capability – for which no binary code can be generated yet –, an application must load PTX code that will be compiled just-in-time for these devices.

Which PTX and binary code gets embedded in a CUDA C application is controlled by the –arch and –code compiler options or the –gencode compiler option as detailed in the nvcc user manual. For example,

nvcc x.cu –gencode arch=compute_10,code=sm_10
          –gencode arch=compute_11,code=\’compute_11,sm_11\’

embeds binary code compatible with compute capability 1.0 (first –gencode option) and PTX and binary code compatible with compute capability 1.1 (second –gencode option).

Host code is generated to automatically select at runtime the most appropriate code to load and execute, which, in the above example, will be:
 1.0 binary code for devices with compute capability 1.0,
 1.1 binary code for devices with compute capability 1.1, 1.2, 1.3,
 1.1 PTX code for devices with compute capabilities 2.0 and higher.

x.cu can have an optimized code path that uses atomic operations, for example, which are only supported in devices of compute capability 1.1 and higher. The __CUDA_ARCH__ macro can be used to differentiate various code paths based on compute capability. It is only defined for device code. When compiling with “arch=compute_11” for example, __CUDA_ARCH__ is equal to 110.
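A minimal sketch of such a code path (the function below is illustrative, not from the guide):

// Hypothetical device function: uses atomicAdd() where the hardware
// supports it (compute capability 1.1 and higher, i.e. __CUDA_ARCH__ >= 110)
// and falls back to a plain, non-atomic increment elsewhere.
__device__ int increment(int* addr)
{
#if __CUDA_ARCH__ >= 110
    return atomicAdd(addr, 1);
#else
    return (*addr)++; // not thread-safe; for illustration only
#endif
}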

Applications using the driver API must compile code to separate files and explicitly load and execute the most appropriate file at runtime.

The nvcc user manual lists various shorthands for the –arch, –code, and –gencode compiler options. For example, “arch=sm_13” is a shorthand for “arch=compute_13 code=compute_13,sm_13” (which is the same as “-gencode arch=compute_13,code=\’compute_13,sm_13\’”).

3.1.5 C/C++ Compatibility

The front end of the compiler processes CUDA source files according to C++ syntax rules. Full C++ is supported for the host code. However, only a subset of C++ is fully supported for the device code as described in detail in Appendix D. As a consequence of the use of C++ syntax rules, void pointers (e.g., returned by malloc()) cannot be assigned to non-void pointers without a typecast.

nvcc also supports specific keywords and directives detailed in Appendix E.

3.1.6 64-Bit Compatibility

The 64-bit version of nvcc compiles device code in 64-bit mode (i.e. pointers are 64-bit). Device code compiled in 64-bit mode is only supported with host code compiled in 64-bit mode.

Similarly, the 32-bit version of nvcc compiles device code in 32-bit mode and device code compiled in 32-bit mode is only supported with host code compiled in 32-bit mode.

The 32-bit version of nvcc can compile device code in 64-bit mode also using the m64 compiler option. The 64-bit version of nvcc can compile device code in 32-bit mode also using the m32 compiler option.

3.2 CUDA C

CUDA C provides a simple path for users familiar with the C programming language to easily write programs for execution by the device. It consists of a minimal set of extensions to the C language and a runtime library.

The core language extensions have been introduced in Chapter 2. This section continues with an introduction to the runtime. A complete description of all extensions can be found in Appendix B and a complete description of the runtime in the CUDA reference manual.

The runtime is implemented in the cudart dynamic library and all its entry points are prefixed with cuda.

There is no explicit initialization function for the runtime; it initializes the first time a runtime function is called (more specifically any function other than functions from the device and version management sections of the reference manual). One needs to keep this in mind when timing runtime function calls and when interpreting the error code from the first call into the runtime.

Once the runtime has been initialized in a host thread, any resource (memory, stream, event, etc.) allocated via some runtime function call in the host thread is only valid within the context of the host thread. Therefore only runtime function calls made by the host thread (memory copies, kernel launches, …) can operate on these resources. This is because a CUDA context (see Section 3.3.1) is created under the hood as part of initialization and made current to the host thread, and it cannot be made current to any other host thread.
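Given this lazy initialization, a common idiom (an aside, not from the guide's text) is to force initialization up front so that later calls can be timed, and their error codes interpreted, in isolation:

// cudaFree(0) is a cheap runtime call; issuing it first makes the
// runtime (and its context) initialize before any timed section runs.
cudaFree(0);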

On a system with multiple devices, kernels are executed on device 0 by default as detailed in Section 3.2.3.

3.2.1 Device Memory

As mentioned in Section 2.4, the CUDA programming model assumes a system composed of a host and a device, each with their own separate memory. Kernels can only operate out of device memory, so the runtime provides functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory.

Device memory can be allocated either as linear memory or as CUDA arrays. CUDA arrays are opaque memory layouts optimized for texture fetching. They are described in Section 3.2.4.

Linear memory exists on the device in a 32-bit address space for devices of compute capability 1.x and 40-bit address space of devices of compute capability 2.x, so separately allocated entities can reference one another via pointers, for example, in a binary tree.

Linear memory is typically allocated using cudaMalloc() and freed using cudaFree() and data transfer between host memory and device memory are typically done using cudaMemcpy(). In the vector addition code sample of Section 2.1, the vectors need to be copied from host memory to device memory:

// Device code
__global__ void VecAdd(float* A, float* B, float* C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}

// Host code
int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate input vectors h_A and h_B in host memory
    float* h_A = (float*)malloc(size);
    float* h_B = (float*)malloc(size);

    // Initialize input vectors
    ...

    // Allocate vectors in device memory
    float* d_A;
    cudaMalloc(&d_A, size);
    float* d_B;
    cudaMalloc(&d_B, size);

    float* d_C;
    cudaMalloc(&d_C, size);

    // Copy vectors from host memory to device memory
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    // Invoke kernel
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    VecAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);

    // Copy result from device memory to host memory
    // h_C contains the result in host memory
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);

    // Free host memory
    ...
}

Linear memory can also be allocated through cudaMallocPitch() and cudaMalloc3D(). These functions are recommended for allocations of 2D or 3D arrays as it makes sure that the allocation is appropriately padded to meet the alignment requirements described in Section 5.3.2.1, therefore ensuring best performance when accessing the row addresses or performing copies between 2D arrays and other regions of device memory (using the cudaMemcpy2D() and cudaMemcpy3D() functions). The returned pitch (or stride) must be used to access array elements. The following code sample allocates a width×height 2D array of floating-point values and shows how to loop over the array elements in device code:

// Host code
int width = 64, height = 64;
float* devPtr;
size_t pitch;
cudaMallocPitch(&devPtr, &pitch, width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);

// Device code
__global__ void MyKernel(float* devPtr, size_t pitch, int width, int height)
{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}
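The sample above does not show the host-side copy into such a pitched allocation; as a hedged complement (h_data is a hypothetical, tightly packed host array), it could be done with cudaMemcpy2D():

// The host rows are tightly packed, so the source pitch is
// width * sizeof(float); the destination pitch is the one
// returned by cudaMallocPitch() above.
float* h_data = (float*)malloc(width * height * sizeof(float));
cudaMemcpy2D(devPtr, pitch,
             h_data, width * sizeof(float),
             width * sizeof(float), height,
             cudaMemcpyHostToDevice);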

The following code sample allocates a width×height×depth 3D array of floating-point values and shows how to loop over the array elements in device code:

// Host code
int width = 64, height = 64, depth = 64;
cudaExtent extent = make_cudaExtent(width * sizeof(float), height, depth);
cudaPitchedPtr devPitchedPtr;
cudaMalloc3D(&devPitchedPtr, extent);
MyKernel<<<100, 512>>>(devPitchedPtr, width, height, depth);

// Device code
__global__ void MyKernel(cudaPitchedPtr devPitchedPtr,
                         int width, int height, int depth)
{
    char* devPtr = (char*)devPitchedPtr.ptr;
    size_t pitch = devPitchedPtr.pitch;
    size_t slicePitch = pitch * height;
    for (int z = 0; z < depth; ++z) {
        char* slice = devPtr + z * slicePitch;
        for (int y = 0; y < height; ++y) {
            float* row = (float*)(slice + y * pitch);
            for (int x = 0; x < width; ++x) {
                float element = row[x];
            }
        }
    }
}

The reference manual lists all the various functions used to copy memory between linear memory allocated with cudaMalloc(), linear memory allocated with cudaMallocPitch() or cudaMalloc3D(), CUDA arrays, and memory allocated for variables declared in global or constant memory space.

The following code sample illustrates various ways of accessing global variables via the runtime API:

__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));

__device__ float devData;
float value = 3.14f;
cudaMemcpyToSymbol(devData, &value, sizeof(float));

__device__ float* devPointer;
float* ptr;
cudaMalloc(&ptr, 256 * sizeof(float));
cudaMemcpyToSymbol(devPointer, &ptr, sizeof(ptr));

cudaGetSymbolAddress() is used to retrieve the address pointing to the memory allocated for a variable declared in global memory space. The size of the allocated memory is obtained through cudaGetSymbolSize().
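As a further hedged illustration (not in the guide's sample), the address retrieved this way can then be passed to a kernel like any other device pointer:

// Obtain the device address of the devData variable declared above.
float* d_ptr;
cudaGetSymbolAddress((void**)&d_ptr, devData);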

Shared memory is expected to be much faster than global memory as mentioned in Section 2.2 and detailed in Section 5.3.2.3. Any opportunity to replace global memory accesses by shared memory accesses should therefore be exploited as illustrated by the following matrix multiplication example.

The following code sample is a straightforward implementation of matrix multiplication that does not take advantage of shared memory. Each thread reads one row of A and one column of B and computes the corresponding element of C as illustrated in Figure 3-1. A is therefore read B.width times from global memory and B is read A.height times.

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Each thread computes one element of C
    // by accumulating results into Cvalue
    float Cvalue = 0;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    for (int e = 0; e < A.width; ++e)
        Cvalue += A.elements[row * A.width + e]
                * B.elements[e * B.width + col];
    C.elements[row * C.width + col] = Cvalue;
}

Figure 3-1. Matrix Multiplication without Shared Memory

The following code sample is an implementation of matrix multiplication that does take advantage of shared memory. In this implementation, each thread block is responsible for computing one square sub-matrix Csub of C and each thread within the block is responsible for computing one element of Csub. As illustrated in Figure 3-2, Csub is equal to the product of two rectangular matrices: the sub-matrix of A of dimension (A.width, block_size) that has the same row indices as Csub, and the sub-matrix of B of dimension (block_size, A.width) that has the same column indices as Csub. In order to fit into the device's resources, these two rectangular matrices are divided into as many square matrices of dimension block_size as necessary and Csub is computed as the sum of the products of these square matrices. Each of these products is performed by first loading the two corresponding square matrices from global memory to shared memory with one thread loading one element of each matrix, and then by having each thread compute one element of the product. Each thread accumulates the result of each of these products into a register and once done writes the result to global memory.

By blocking the computation this way, we take advantage of fast shared memory and save a lot of global memory bandwidth since A is only read (B.width / block_size) times from global memory and B is read (A.height / block_size) times.

The Matrix type from the previous code sample is augmented with a stride field, so that sub-matrices can be efficiently represented with the same type. __device__ functions (see Section B.1.1) are used to get and set elements and build any sub-matrix from a matrix.

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col,
                           float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width = BLOCK_SIZE;
    Asub.height = BLOCK_SIZE;
    Asub.stride = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row
                                + BLOCK_SIZE * col];
    return Asub;
}

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;

    d_A.width = d_A.stride = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = d_B.stride = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;

    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;

    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;

    // Loop over all the sub-matrices of A and B that are
    // required to compute Csub
    // Multiply each pair of sub-matrices together
    // and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {

        // Get sub-matrix Asub of A
        Matrix Asub = GetSubMatrix(A, blockRow, m);

        // Get sub-matrix Bsub of B
        Matrix Bsub = GetSubMatrix(B, m, blockCol);

        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);

        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();

        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}

Figure 3-2. Matrix Multiplication with Shared Memory

3.2.3 Multiple Devices

A host system can have multiple devices. These devices can be enumerated, their properties can be queried, and one of them can be selected for kernel executions.

Several host threads can execute device code on the same device, but by design, a host thread can execute device code on only one device at any given time. As a consequence, multiple host threads are required to execute device code on multiple devices. Also, any CUDA resources created through the runtime in one host thread cannot be used by the runtime from another host thread.

The following code sample enumerates all devices in the system and retrieves their properties. It also determines the number of CUDA-enabled devices.

int deviceCount;
cudaGetDeviceCount(&deviceCount);
int device;
for (device = 0; device < deviceCount; ++device) {
    cudaDeviceProp deviceProp;
    cudaGetDeviceProperties(&deviceProp, device);
    if (device == 0) {


        if (deviceProp.major == 9999 && deviceProp.minor == 9999)
            printf("There is no device supporting CUDA.\n");
        else if (deviceCount == 1)
            printf("There is 1 device supporting CUDA\n");
        else
            printf("There are %d devices supporting CUDA\n",
                   deviceCount);
    }
}

By default, the device associated to the host thread is implicitly selected as device 0 as soon as a non-device-management runtime function is called (see Section 3.6 for exceptions). Any other device can be selected by calling cudaSetDevice() first. After a device has been selected, either implicitly or explicitly, any subsequent explicit call to cudaSetDevice() will fail until cudaThreadExit() is called. cudaThreadExit() cleans up all runtime-related resources associated with the calling host thread. Any subsequent API call reinitializes the runtime.
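As a brief illustrative sketch (not part of the guide's samples; MyKernel and d_data are placeholders), a host thread would select a device as follows:

int deviceCount;
cudaGetDeviceCount(&deviceCount);
if (deviceCount > 1) {
    // Must be called before any non-device-management runtime
    // function implicitly selects device 0
    cudaSetDevice(1);
}
float* d_data;
cudaMalloc(&d_data, 1024 * sizeof(float)); // allocated on device 1
MyKernel<<<1, 256>>>(d_data);              // hypothetical kernel, runs on device 1
cudaThreadExit();  // release this thread's runtime resources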

3.2.4 Texture and Surface Memory
CUDA supports a subset of the texturing hardware that the GPU uses for graphics to access texture and surface memory. Reading data from texture or surface memory instead of global memory can have several performance benefits as described in Section 5.3.2.5.

3.2.4.1 Texture Memory
Texture memory is read from kernels using device functions called texture fetches, described in Section B.8. The first parameter of a texture fetch specifies an object called a texture reference. A texture reference defines which part of texture memory is fetched. As detailed in Section 3.2.4.1.3, it must be bound through runtime functions to some region of memory, called a texture, before it can be used by a kernel. Several distinct texture references might be bound to the same texture or to textures that overlap in memory. A texture reference has several attributes. One of them is its dimensionality, which specifies whether the texture is addressed as a one-dimensional array using one texture coordinate, a two-dimensional array using two texture coordinates, or a three-dimensional array using three texture coordinates. Elements of the array are called texels, short for "texture elements." Other attributes define the input and output data types of the texture fetch, as well as how the input coordinates are interpreted and what processing should be done. A texture can be any region of linear memory or a CUDA array (described in Section 3.2.4.3). Section G.1 lists the maximum texture width, height, and depth depending on the compute capability of the device.


3.2.4.1.1 Texture Reference Declaration
Some of the attributes of a texture reference are immutable and must be known at compile time; they are specified when declaring the texture reference. A texture reference is declared at file scope as a variable of type texture:
texture<Type, Dim, ReadMode> texRef;

where:

 Type specifies the type of data that is returned when fetching the texture; Type is restricted to the basic integer and single-precision floating-point types and any of the 1-, 2-, and 4-component vector types defined in Section B.3.1;
 Dim specifies the dimensionality of the texture reference and is equal to 1, 2, or 3; Dim is an optional argument which defaults to 1;
 ReadMode is equal to cudaReadModeNormalizedFloat or cudaReadModeElementType; if it is cudaReadModeNormalizedFloat and Type is a 16-bit or 8-bit integer type, the value is actually returned as floating-point type and the full range of the integer type is mapped to [0.0, 1.0] for unsigned integer type and [-1.0, 1.0] for signed integer type; for example, an unsigned 8-bit texture element with the value 0xff reads as 1; if it is cudaReadModeElementType, no conversion is performed; ReadMode is an optional argument which defaults to cudaReadModeElementType.

A texture reference can only be declared as a static global variable and cannot be passed as an argument to a function.
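For illustration (these declarations are examples, not taken from the guide's samples), the three template arguments can be combined as follows:

// 1D texture of float elements, read back as float
texture<float> texRef1D;

// 2D texture of uchar4 elements; cudaReadModeNormalizedFloat
// maps each 8-bit component to a float in [0.0, 1.0]
texture<uchar4, 2, cudaReadModeNormalizedFloat> texRef2D;

// 3D texture of signed short elements, returned unconverted
texture<short, 3, cudaReadModeElementType> texRef3D;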

3.2.4.1.2 Runtime Texture Reference Attributes
The other attributes of a texture reference are mutable and can be changed at runtime through the host runtime. They specify whether texture coordinates are normalized or not, the addressing mode, and texture filtering, as detailed below.

By default, textures are referenced using floating-point coordinates in the range [0, N) where N is the size of the texture in the dimension corresponding to the coordinate. For example, a texture that is 64×32 in size will be referenced with coordinates in the range [0, 63] and [0, 31] for the x and y dimensions, respectively. Normalized texture coordinates cause the coordinates to be specified in the range [0.0, 1.0) instead of [0, N), so the same 64×32 texture would be addressed by normalized coordinates in the range [0, 1) in both the x and y dimensions. Normalized texture coordinates are a natural fit to some applications' requirements, if it is preferable for the texture coordinates to be independent of the texture size.

The addressing mode defines what happens when texture coordinates are out of range. When using unnormalized texture coordinates, texture coordinates outside the range [0, N) are clamped: Values below 0 are set to 0 and values greater or equal to N are set to N-1. Clamping is also the default addressing mode when using normalized texture coordinates: Values below 0.0 or above 1.0 are clamped to the range [0.0, 1.0). For normalized coordinates, the "wrap" addressing mode also may be specified. Wrap addressing is usually used when the texture contains a periodic signal. It uses only the fractional part of the texture coordinate; for example, 1.25 is treated the same as 0.25 and -1.25 is treated the same as 0.75.

Linear texture filtering may be done only for textures that are configured to return floating-point data. It performs low-precision interpolation between neighboring texels. When enabled, the texels surrounding a texture fetch location are read and the return value of the texture fetch is interpolated based on where the texture coordinates fell between the texels. Simple linear interpolation is performed for one-dimensional textures and bilinear interpolation is performed for two-dimensional textures. Appendix F gives more details on texture fetching.
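These mutable attributes are exposed as fields of the texture reference object (see the textureReference structure in Section 3.2.4.1.3). As a short illustrative sketch (not from the guide's samples), host code might configure a reference for a periodic floating-point texture like this:

// Assumes: texture<float, 2, cudaReadModeElementType> texRef;
texRef.normalized     = true;                 // coordinates in [0.0, 1.0)
texRef.addressMode[0] = cudaAddressModeWrap;  // wrap x: 1.25 reads as 0.25
texRef.addressMode[1] = cudaAddressModeWrap;  // wrap y
texRef.filterMode     = cudaFilterModeLinear; // bilinear interpolation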

3.2.4.1.3 Texture Binding
As explained in the reference manual, the runtime API has a low-level C-style interface and a high-level C++-style interface. The texture type is defined in the high-level API as a structure publicly derived from the textureReference type defined in the low-level API as such:
struct textureReference {
    int                          normalized;
    enum cudaTextureFilterMode   filterMode;
    enum cudaTextureAddressMode  addressMode[3];
    struct cudaChannelFormatDesc channelDesc;
};

 normalized specifies whether texture coordinates are normalized or not; if it is non-zero, all elements in the texture are addressed with texture coordinates in the range [0,1] rather than in the range [0,width-1], [0,height-1], or [0,depth-1] where width, height, and depth are the texture sizes;
 filterMode specifies the filtering mode, that is how the value returned when fetching the texture is computed based on the input texture coordinates; filterMode is equal to cudaFilterModePoint or cudaFilterModeLinear; if it is cudaFilterModePoint, the returned value is the texel whose texture coordinates are the closest to the input texture coordinates; if it is cudaFilterModeLinear, the returned value is the linear interpolation of the two (for a one-dimensional texture), four (for a two-dimensional texture), or eight (for a three-dimensional texture) texels whose texture coordinates are the closest to the input texture coordinates; cudaFilterModeLinear is only valid for returned values of floating-point type;
 addressMode specifies the addressing mode, that is how out-of-range texture coordinates are handled; addressMode is an array of size three whose first, second, and third elements specify the addressing mode for the first, second, and third texture coordinates, respectively; the addressing mode is equal to either cudaAddressModeClamp, in which case out-of-range texture coordinates are clamped to the valid range, or cudaAddressModeWrap, in which case out-of-range texture coordinates are wrapped to the valid range; cudaAddressModeWrap is only supported for normalized texture coordinates;
 channelDesc describes the format of the value that is returned when fetching the texture; channelDesc is of the following type:
struct cudaChannelFormatDesc {
    int x, y, z, w;
    enum cudaChannelFormatKind f;
};

where x, y, z, and w are equal to the number of bits of each component of the returned value and f is:


 cudaChannelFormatKindSigned if these components are of signed integer type,
 cudaChannelFormatKindUnsigned if they are of unsigned integer type,
 cudaChannelFormatKindFloat if they are of floating point type.

normalized, addressMode, and filterMode may be directly modified in host code.

Before a kernel can use a texture reference to read from texture memory, the texture reference must be bound to a texture using cudaBindTexture() or cudaBindTextureToArray(). cudaUnbindTexture() is used to unbind a texture reference.

The following code samples bind a texture reference to linear memory pointed to by devPtr:

 Using the low-level API:
texture<float, 2, cudaReadModeElementType> texRef;
textureReference* texRefPtr;
cudaGetTextureReference(&texRefPtr, "texRef");
cudaChannelFormatDesc channelDesc =
    cudaCreateChannelDesc<float>();
cudaBindTexture2D(0, texRefPtr, devPtr, &channelDesc,
                  width, height, pitch);

 Using the high-level API:
texture<float, 2, cudaReadModeElementType> texRef;
cudaChannelFormatDesc channelDesc =
    cudaCreateChannelDesc<float>();
cudaBindTexture2D(0, texRef, devPtr, &channelDesc,
                  width, height, pitch);

The following code samples bind a texture reference to a CUDA array cuArray:

 Using the low-level API:
texture<float, 2, cudaReadModeElementType> texRef;
textureReference* texRefPtr;
cudaGetTextureReference(&texRefPtr, "texRef");
cudaChannelFormatDesc channelDesc;
cudaGetChannelDesc(&channelDesc, cuArray);
cudaBindTextureToArray(texRefPtr, cuArray, &channelDesc);

 Using the high-level API:
texture<float, 2, cudaReadModeElementType> texRef;
cudaBindTextureToArray(texRef, cuArray);

The format specified when binding a texture to a texture reference must match the parameters specified when declaring the texture reference; otherwise, the results of texture fetches are undefined.

The following code sample applies some simple transformation kernel to a texture:

// 2D float texture
texture<float, 2, cudaReadModeElementType> texRef;

// Simple transformation kernel
__global__ void transformKernel(float* output,
                                int width, int height,
                                float theta)
{

    // Calculate normalized texture coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    float u = x / (float)width;
    float v = y / (float)height;

    // Transform coordinates
    u -= 0.5f;
    v -= 0.5f;
    float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
    float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;

    // Read from texture and write to global memory
    output[y * width + x] = tex2D(texRef, tu, tv);
}

// Host code
int main()
{
    // Allocate CUDA array in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(32, 0, 0, 0,
                              cudaChannelFormatKindFloat);
    cudaArray* cuArray;
    cudaMallocArray(&cuArray, &channelDesc, width, height);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuArray, 0, 0, h_data, size,
                      cudaMemcpyHostToDevice);

    // Set texture parameters
    texRef.addressMode[0] = cudaAddressModeWrap;
    texRef.addressMode[1] = cudaAddressModeWrap;
    texRef.filterMode = cudaFilterModeLinear;
    texRef.normalized = true;

    // Bind the array to the texture reference
    cudaBindTextureToArray(texRef, cuArray, channelDesc);

    // Allocate result of transformation in device memory
    float* output;
    cudaMalloc(&output, width * height * sizeof(float));

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    transformKernel<<<dimGrid, dimBlock>>>(output, width, height,
                                           angle);

    // Free device memory
    cudaFreeArray(cuArray);
    cudaFree(output);
}

3.2.4.1.4 16-Bit Floating-Point Textures

The 16-bit floating-point or half format supported by CUDA arrays is the same as the IEEE 754-2008 binary16 format.

CUDA C does not support a matching data type, but provides intrinsic functions to convert to and from the 32-bit floating-point format via the unsigned short type: __float2half(float) and __half2float(unsigned short). These functions are only supported in device code. Equivalent functions for the host code can be found in the OpenEXR library, for example.

16-bit floating-point components are promoted to 32-bit float during texture fetching before any filtering is performed.

A channel description for the 16-bit floating-point format can be created by calling one of the cudaCreateChannelDescHalf*() functions.

3.2.4.2 Surface Memory

A CUDA array (described in Section 3.2.4.3), created with the cudaArraySurfaceLoadStore flag, can be read and written via a surface reference using the functions described in Section B.9. Section G.1 lists the maximum surface width, height, and depth depending on the compute capability of the device.

3.2.4.2.1 Surface Reference Declaration

A surface reference is declared at file scope as a variable of type surface:

surface<void, Dim> surfRef;

where Dim specifies the dimensionality of the surface reference and is equal to 1 or 2; Dim is an optional argument which defaults to 1.

A surface reference can only be declared as a static global variable and cannot be passed as an argument to a function.

3.2.4.2.2 Surface Binding

Before a kernel can use a surface reference to access a CUDA array, the surface reference must be bound to the CUDA array using cudaBindSurfaceToArray().

The following code samples bind a surface reference to a CUDA array cuArray:

 Using the low-level API:
surface<void, 2> surfRef;
surfaceReference* surfRefPtr;
cudaGetSurfaceReference(&surfRefPtr, "surfRef");
cudaChannelFormatDesc channelDesc;
cudaGetChannelDesc(&channelDesc, cuArray);
cudaBindSurfaceToArray(surfRef, cuArray, &channelDesc);

 Using the high-level API:
surface<void, 2> surfRef;
cudaBindSurfaceToArray(surfRef, cuArray);

A CUDA array must be read and written using surface functions of matching dimensionality and type and via a surface reference of matching dimensionality; otherwise, the results of reading and writing the CUDA array are undefined.

Unlike texture memory, surface memory uses byte addressing. This means that the x-coordinate used to access a texture element via texture functions needs to be multiplied by the byte size of the element to access the same element via a surface function. For example, the element at texture coordinate x of a one-dimensional floating-point CUDA array bound to a texture reference texRef and a surface reference surfRef is read using tex1D(texRef, x) via texRef, but surf1Dread(surfRef, 4*x) via surfRef. Similarly, the element at texture coordinates x and y of a two-dimensional floating-point CUDA array bound to a texture reference texRef and a surface reference surfRef is accessed using tex2D(texRef, x, y) via texRef, but surf2Dread(surfRef, 4*x, y) via surfRef (the byte offset of the y-coordinate is internally calculated from the underlying line pitch of the CUDA array).

The following code sample applies a simple copy kernel to a pair of 2D surfaces:

// 2D surfaces
surface<void, 2> inputSurfRef;
surface<void, 2> outputSurfRef;

// Simple copy kernel
__global__ void copyKernel(int width, int height)
{
    // Calculate surface coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        uchar4 data;
        // Read from input surface
        surf2Dread(&data, inputSurfRef, x * 4, y);
        // Write to output surface
        surf2Dwrite(data, outputSurfRef, x * 4, y);
    }
}

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(8, 8, 8, 8,
                              cudaChannelFormatKindUnsigned);
    cudaArray* cuInputArray;
    cudaMallocArray(&cuInputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);
    cudaArray* cuOutputArray;
    cudaMallocArray(&cuOutputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuInputArray, 0, 0, h_data, size,
                      cudaMemcpyHostToDevice);

    // Bind the arrays to the surface references
    cudaBindSurfaceToArray(inputSurfRef, cuInputArray);
    cudaBindSurfaceToArray(outputSurfRef, cuOutputArray);

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    copyKernel<<<dimGrid, dimBlock>>>(width, height);

    // Free device memory
    cudaFreeArray(cuInputArray);
    cudaFreeArray(cuOutputArray);
}

3.2.4.3 CUDA Arrays

CUDA arrays are opaque memory layouts optimized for texture fetching. They are one-dimensional, two-dimensional, or three-dimensional and composed of elements, each of which has 1, 2 or 4 components that may be signed or unsigned 8-, 16- or 32-bit integers, 16-bit floats, or 32-bit floats. CUDA arrays are only readable by kernels through texture fetching and may only be bound to texture references with the same number of packed components.

3.2.4.4 Read/Write Coherency

The texture and surface memory is cached (see Section 5.3.2.5) and, within the same kernel call, the cache is not kept coherent with respect to global memory writes and surface memory writes, so any texture fetch or surface read to an address that has been written to via a global write or a surface write in the same kernel call returns undefined data. In other words, a thread can safely read some texture or surface memory location only if this memory location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread from the same kernel call.

3.2.5 Page-Locked Host Memory

The runtime also provides functions to allocate and free page-locked (also known as pinned) host memory – as opposed to regular pageable host memory allocated by malloc(): cudaHostAlloc() and cudaFreeHost().

Using page-locked host memory has several benefits:
 Copies between page-locked host memory and device memory can be performed concurrently with kernel execution for some devices as mentioned in Section 3.2.6;
 On some devices, page-locked host memory can be mapped into the address space of the device, eliminating the need to copy it to or from device memory as detailed in Section 3.2.5.3;
 On systems with a front-side bus, bandwidth between host memory and device memory is higher if host memory is allocated as page-locked and even higher if in addition it is allocated as write-combining as described in Section 3.2.5.2.

Page-locked host memory is a scarce resource however, so allocations in page-locked memory will start failing long before allocations in pageable memory. In addition, by reducing the amount of physical memory available to the operating system for paging, allocating too much page-locked memory reduces overall system performance.
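As a brief illustrative sketch (not from the guide; d_buf is assumed to be a device allocation made elsewhere and the buffer size is arbitrary):

float* h_buf;
// Allocate 4 MB of page-locked (pinned) host memory
cudaHostAlloc((void**)&h_buf, 4 * 1024 * 1024,
              cudaHostAllocDefault);

// Pinned memory enables higher-bandwidth, asynchronous copies
cudaMemcpy(d_buf, h_buf, 4 * 1024 * 1024,
           cudaMemcpyHostToDevice);

cudaFreeHost(h_buf);  // must be freed with cudaFreeHost(), not free()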

3.2.5.1 Portable Memory

A block of page-locked memory can be used by any host threads, but by default, the benefits of using page-locked memory described above are only available for the thread that allocates it. To make these advantages available to all threads, it needs to be allocated by passing flag cudaHostAllocPortable to cudaHostAlloc().

3.2.5.2 Write-Combining Memory

By default page-locked host memory is allocated as cacheable. It can optionally be allocated as write-combining instead by passing flag cudaHostAllocWriteCombined to cudaHostAlloc(). Write-combining memory frees up L1 and L2 cache resources, making more cache available to the rest of the application. In addition, write-combining memory is not snooped during transfers across the PCI Express bus, which can improve transfer performance by up to 40%.

Reading from write-combining memory from the host is prohibitively slow, so write-combining memory should in general be used for memory that the host only writes to.

3.2.5.3 Mapped Memory

On devices of compute capability greater than 1.0, a block of page-locked host memory can also be mapped into the address space of the device by passing flag cudaHostAllocMapped to cudaHostAlloc(). Such a block has therefore two addresses: one in host memory and one in device memory. The host memory pointer is returned by cudaHostAlloc() and the device memory pointer can be retrieved using cudaHostGetDevicePointer() and then used to access the block from within a kernel.

Accessing host memory directly from within a kernel has several advantages:
 There is no need to allocate a block in device memory and copy data between this block and the block in host memory; data transfers are implicitly performed as needed by the kernel;
 There is no need to use streams (see Section 3.2.6.4) to overlap data transfers with kernel execution; the kernel-originated data transfers automatically overlap with kernel execution.

Since mapped page-locked memory is shared between host and device however, the application must synchronize memory accesses using streams or events (see Section 3.2.6) to avoid any potential read-after-write, write-after-read, or write-after-write hazards.

A block of page-locked host memory can be allocated as both mapped and portable (see Section 3.2.5.1), in which case each host thread that needs to map the block to its device address space must call cudaHostGetDevicePointer() to retrieve a device pointer, as device pointers will generally differ from one host thread to the other.

The simple zero-copy SDK sample comes with a detailed document on the page-locked memory APIs.
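A minimal zero-copy sketch is shown below (illustrative only; ScaleKernel, N, blocks, and threads are placeholders), assuming the device supports mapped page-locked host memory:

// Must precede any other CUDA call in this host thread
cudaSetDeviceFlags(cudaDeviceMapHost);

float* h_data;
cudaHostAlloc((void**)&h_data, N * sizeof(float),
              cudaHostAllocMapped);

float* d_data;  // device alias of the same physical memory
cudaHostGetDevicePointer((void**)&d_data, h_data, 0);

// The kernel reads and writes host memory directly
ScaleKernel<<<blocks, threads>>>(d_data, N);  // hypothetical kernel
cudaThreadSynchronize();  // make device writes visible to the host

cudaFreeHost(h_data);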

To be able to retrieve the device pointer to any mapped page-locked memory within a given host thread, page-locked memory mapping must be enabled by calling cudaSetDeviceFlags() with the cudaDeviceMapHost flag before any other CUDA call is performed by the thread. Otherwise, cudaHostGetDevicePointer() will return an error.

cudaHostGetDevicePointer() also returns an error if the device does not support mapped page-locked host memory. Applications may query whether a device supports mapped page-locked host memory or not by calling cudaGetDeviceProperties() and checking the canMapHostMemory property.

Note that atomic functions (Section B.11) operating on mapped page-locked memory are not atomic from the point of view of the host or other devices.

3.2.6 Asynchronous Concurrent Execution

3.2.6.1 Concurrent Execution between Host and Device

In order to facilitate concurrent execution between host and device, some function calls are asynchronous: Control is returned to the host thread before the device has completed the requested task. These are:
 Kernel launches;
 Device ↔ device memory copies;
 Host ↔ device memory copies of a memory block of 64 KB or less;
 Memory copies performed by functions that are suffixed with Async;
 Memory set function calls.

Programmers can globally disable asynchronous kernel launches for all CUDA applications running on a system by setting the CUDA_LAUNCH_BLOCKING environment variable to 1. This feature is provided for debugging purposes only and should never be used as a way to make production software run reliably.

When an application is run via a CUDA debugger or profiler (cuda-gdb, CUDA Visual Profiler, Parallel Nsight), all launches are synchronous.

3.2.6.2 Overlap of Data Transfer and Kernel Execution

Some devices of compute capability 1.1 and higher can perform copies between page-locked host memory and device memory concurrently with kernel execution. Applications may query this capability by calling cudaGetDeviceProperties() and checking the deviceOverlap property. This capability is currently supported only for memory copies that do not involve CUDA arrays or 2D arrays allocated through cudaMallocPitch() (see Section 3.2.1).

3.2.6.3 Concurrent Kernel Execution

Some devices of compute capability 2.x can execute multiple kernels concurrently. Applications may query this capability by calling cudaGetDeviceProperties() and checking the concurrentKernels property. The maximum number of kernel launches that a device can execute concurrently is sixteen.

A kernel from one CUDA context cannot execute concurrently with a kernel from another CUDA context. Kernels that use many textures or a large amount of local memory are less likely to execute concurrently with other kernels.

3.2.6.4 Concurrent Data Transfers

Some devices of compute capability 2.x can perform a copy from page-locked host memory to device memory concurrently with a copy from device memory to page-locked host memory.

3.2.6.5 Stream

Applications manage concurrency through streams. A stream is a sequence of commands that execute in order. Different streams, on the other hand, may execute their commands out of order with respect to one another or concurrently; this behavior is not guaranteed and should therefore not be relied upon for correctness (e.g. inter-kernel communication is undefined).

3.2.6.5.1 Creation and Destruction

A stream is defined by creating a stream object and specifying it as the stream parameter to a sequence of kernel launches and host ↔ device memory copies. The following code sample creates two streams and allocates an array hostPtr of float in page-locked memory:

cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);

Each of these streams is defined by the following code sample as a sequence of one memory copy from host to device, one kernel launch, and one memory copy from device to host:

for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}

Each stream copies its portion of input array hostPtr to array inputDevPtr in device memory, processes inputDevPtr on the device by calling MyKernel(), and copies the result outputDevPtr back to the same portion of hostPtr. Section 3.2.6.5.4 describes how the streams overlap in this example depending on the capability of the device. Note that hostPtr must point to page-locked host memory for any overlap to occur.

Streams are released by calling cudaStreamDestroy():

for (int i = 0; i < 2; ++i)
    cudaStreamDestroy(stream[i]);

cudaStreamDestroy() waits for all preceding commands in the given stream to complete before destroying the stream and returning control to the host thread.

3.2.6.5.2 Explicit Synchronization

There are various ways to explicitly synchronize streams with each other.

cudaThreadSynchronize() waits until all preceding commands in all streams have completed.

cudaStreamSynchronize() takes a stream as a parameter and waits until all preceding commands in the given stream have completed. It can be used to synchronize the host with a specific stream, allowing other streams to continue executing on the device.

cudaStreamWaitEvent() takes a stream and an event as parameters (see Section 3.2.6.6 for a description of events) and makes all the commands added to the given stream after the call to cudaStreamWaitEvent() delay their execution until the given event has completed. The stream can be 0, in which case all the commands added to any stream after the call to cudaStreamWaitEvent() wait on the event.

cudaStreamQuery() provides applications with a way to know if all preceding commands in a stream have completed.

To avoid unnecessary slowdowns, all these synchronization functions are usually best used for timing purposes or to isolate a launch or memory copy that is failing.

3.2.6.5.3 Implicit Synchronization

Two commands from different streams cannot run concurrently if any one of the following operations is issued in-between them by the host thread:
 a page-locked host memory allocation,
 a device memory allocation,
 a device memory set,
 a device ↔ device memory copy,
 any CUDA command to stream 0 (including kernel launches and host ↔ device memory copies that do not specify any stream parameter),
 a switch between the L1/shared memory configurations described in Section G.4.1.

For devices that support concurrent kernel execution, any operation that requires a dependency check to see if a streamed kernel launch is complete:
 Can start executing only when all thread blocks of all prior kernel launches from any stream in the CUDA context have started executing;
 Blocks all later kernel launches from any stream in the CUDA context until the kernel launch being checked is complete.

Operations that require a dependency check include any other commands within the same stream as the launch being checked and any call to cudaStreamQuery() on that stream. Therefore, applications should follow these guidelines to improve their potential for concurrent kernel execution (see the sketch after this list for a cross-stream dependency example):
 All independent operations should be issued before dependent operations;
 Synchronization of any kind should be delayed as long as possible.
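For illustration, a sketch (not from the guide; the kernel names and d_buf are placeholders) of expressing a cross-stream dependency with cudaStreamWaitEvent() instead of blocking the host:

cudaEvent_t event;
cudaEventCreate(&event);

ProducerKernel<<<100, 512, 0, stream[0]>>>(d_buf);  // hypothetical
cudaEventRecord(event, stream[0]);  // marks completion of the producer

// Commands issued to stream[1] after this call wait for the event,
// without blocking the host thread
cudaStreamWaitEvent(stream[1], event, 0);
ConsumerKernel<<<100, 512, 0, stream[1]>>>(d_buf);  // hypothetical

cudaEventDestroy(event);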

3.2.6.5.4 Overlapping Behavior

The amount of execution overlap between two streams depends on the order in which the commands are issued to each stream and whether or not the device supports overlap of data transfer and kernel execution (Section 3.2.6.2), concurrent kernel execution (Section 3.2.6.3), and/or concurrent data transfers (Section 3.2.6.4).

For example, on devices that do not support concurrent data transfers, the two streams of the code sample of Section 3.2.6.5.1 do not overlap at all because the memory copy from host to device is issued to stream 1 after the memory copy from device to host is issued to stream 0.

On devices that do support concurrent data transfers, the two streams of the code sample of Section 3.2.6.5.1 do overlap: The memory copy from host to device issued to stream 1 overlaps with the memory copy from device to host issued to stream 0 and even with the kernel launch issued to stream 0 (assuming the device supports overlap of data transfer and kernel execution). In that case however, the kernel executions cannot possibly overlap because the kernel launch is issued to stream 1 after the memory copy from device to host is issued to stream 0, so it is blocked until the kernel launch issued to stream 0 is complete as per Section 3.2.6.5.3.

If the code is rewritten the following way (and assuming the device supports overlap of data transfer and kernel execution)

for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 2; ++i)
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);

then the memory copy from host to device issued to stream 1 overlaps with the kernel launch issued to stream 0, and the kernel executions overlap (assuming the device supports concurrent kernel execution) since the kernel launch is issued to stream 1 before the memory copy from device to host is issued to stream 0. In that case however, the memory copy from device to host issued to stream 0 only overlaps with the last thread blocks of the kernel launch issued to stream 1 as per Section 3.2.6.5.3, which can represent a small portion of the total execution time of the kernel.

3.2.6.6 Event

The runtime also provides a way to closely monitor the device's progress, as well as perform accurate timing, by letting the application asynchronously record events at any point in the program and query when these events are completed. An event has completed when all tasks – or, optionally, all commands in a given stream – preceding the event have completed. Events in stream zero are completed after all preceding tasks and commands in all streams are completed.

The following code sample creates two events:

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

These events can be used to time the code sample of the previous section the following way:

cudaEventRecord(start, 0);
for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDev + i * size, inputHost + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDev + i * size, inputDev + i * size, size);
    cudaMemcpyAsync(outputHost + i * size, outputDev + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop);

They are destroyed this way:

cudaEventDestroy(start);
cudaEventDestroy(stop);

3.2.6.7 Synchronous Calls

When a synchronous function is called, control is not returned to the host thread before the device has completed the requested task. Whether the host thread will then yield, block, or spin can be specified by calling cudaSetDeviceFlags() with some specific flags (see reference manual for details) before any other CUDA call is performed by the host thread.

3.2.7 Graphics Interoperability

Some resources from OpenGL and Direct3D may be mapped into the address space of CUDA, either to enable CUDA to read data written by OpenGL or Direct3D, or to enable CUDA to write data for consumption by OpenGL or Direct3D.

A resource must be registered to CUDA before it can be mapped using the functions mentioned in Sections 3.2.7.1 and 3.2.7.2. These functions return a pointer to a CUDA graphics resource of type struct cudaGraphicsResource. Registering a resource is potentially high-overhead and therefore typically called only once per resource. A CUDA graphics resource is unregistered using cudaGraphicsUnregisterResource().

Once a resource is registered to CUDA, it can be mapped and unmapped as many times as necessary using cudaGraphicsMapResources() and cudaGraphicsUnmapResources(). cudaGraphicsResourceSetMapFlags() can be called to specify usage hints (write-only, read-only) that the CUDA driver can use to optimize resource management.

A mapped resource can be read from or written to by kernels using the device memory address returned by cudaGraphicsResourceGetMappedPointer() for buffers and cudaGraphicsSubResourceGetMappedArray() for CUDA arrays.

Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA produces undefined results.

Sections 3.2.7.1 and 3.2.7.2 give specifics for each graphics API and some code samples.

3.2.7.1 OpenGL Interoperability

Interoperability with OpenGL requires that the CUDA device be specified by cudaGLSetGLDevice() before any other runtime calls. Note that cudaSetDevice() and cudaGLSetGLDevice() are mutually exclusive.

The OpenGL resources that may be mapped into the address space of CUDA are OpenGL buffer, texture, and renderbuffer objects.

A buffer object is registered using cudaGraphicsGLRegisterBuffer(). In CUDA, it appears as a device pointer and can therefore be read and written by kernels or via cudaMemcpy() calls.

A texture or renderbuffer object is registered using cudaGraphicsGLRegisterImage(). In CUDA, it appears as a CUDA array and can therefore be bound to a texture reference and be read and written by kernels or via cudaMemcpy2D() calls. cudaGraphicsGLRegisterImage() supports all texture formats with 1, 2, or 4 components and an internal type of float (e.g. GL_RGBA_FLOAT32) and unnormalized integer (e.g. GL_RGBA8UI). It does not currently support normalized integer formats (e.g. GL_RGBA8). Please note that since GL_RGBA8UI is an OpenGL 3.0 texture format, it can only be written by shaders, not the fixed function pipeline.

The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices stored in a vertex buffer object:

GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;

int main()
{
    // Explicitly set device
    cudaGLSetGLDevice(0);

    // Initialize OpenGL and GLUT
    ...

    // Create buffer object and register it with CUDA
    glGenBuffers(1, &positionsVBO);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    unsigned int size = width * height * 4 * sizeof(float);
    glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    cudaGraphicsGLRegisterBuffer(&positionsVBO_CUDA,
                                 positionsVBO,
                                 cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    glutDisplayFunc(display);
    glutMainLoop();
}

void display()
{
    // Map buffer object for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVBO_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap buffer object
    cudaGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

    // Render from buffer object
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, width * height);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}

void deleteVBO()
{
    cudaGraphicsUnregisterResource(positionsVBO_CUDA);
    glDeleteBuffers(1, &positionsVBO);
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width,
                               unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time)
            * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] = make_float4(u, w, v, 1.0f);
}

On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to retrieve the CUDA device associated to the handle returned by wglEnumGpusNV(). Quadro GPUs offer higher performance OpenGL interoperability than GeForce and Tesla GPUs in a multi-GPU configuration where OpenGL rendering is performed on the Quadro GPU and CUDA computations are performed on other GPUs in the system.

3.2.7.2 Direct3D Interoperability

Direct3D interoperability is supported for Direct3D 9, Direct3D 10, and Direct3D 11.

A CUDA context may interoperate with only one Direct3D device at a time and the CUDA context and Direct3D device must be created on the same GPU. Moreover, the Direct3D device must be created with the D3DCREATE_HARDWARE_VERTEXPROCESSING flag.

Interoperability with Direct3D requires that the Direct3D device be specified by cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), or cudaD3D11SetDirect3DDevice(), before any other runtime calls. cudaD3D9GetDevice(), cudaD3D10GetDevice(), and cudaD3D11GetDevice() can be used to retrieve the CUDA device associated to some adapter.

A set of calls is also available to allow the creation of CUDA devices with interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate Frame Rendering) mode: cudaD3D[9|10|11]GetDevices(). A call to cudaD3D[9|10|11]GetDevices() can be used to obtain a list of CUDA device handles that can be passed as the (optional) last parameter to cudaD3D[9|10|11]SetDirect3DDevice().

The application has the choice to either create multiple CPU threads, each using a different CUDA context, or a single CPU thread using multiple CUDA contexts. Each of these CUDA contexts would be created using one of the CUDA device handles returned by cudaD3D[9|10|11]GetDevices(). If using a single CPU thread, the application relies on the interoperability between CUDA driver and runtime APIs (Section 3.4), which allows it to call cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context active at a given time. See Section 4.3 for general recommendations related to interoperability between Direct3D devices using SLI and CUDA contexts.

The Direct3D resources that may be mapped into the address space of CUDA are Direct3D buffers, textures, and surfaces. These resources are registered using cudaGraphicsD3D9RegisterResource(), cudaGraphicsD3D10RegisterResource(), and cudaGraphicsD3D11RegisterResource().

The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices stored in a vertex buffer object.

Direct3D 9 Version:

IDirect3D9* D3D;

IDirect3DDevice9* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
    // Initialize Direct3D
    D3D = Direct3DCreate9(D3D_SDK_VERSION);

    // Get a CUDA-enabled adapter
    unsigned int adapter = 0;
    for (; adapter < g_pD3D->GetAdapterCount(); adapter++) {
        D3DADAPTER_IDENTIFIER9 adapterId;
        g_pD3D->GetAdapterIdentifier(adapter, 0, &adapterId);
        int dev;
        if (cudaD3D9GetDevice(&dev, adapterId.DeviceName)
            == cudaSuccess)
            break;
    }

    // Create device
    ...
    D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING,
                      &params, &device);

    // Register device with CUDA
    cudaD3D9SetDirect3DDevice(device);

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
                               D3DPOOL_DEFAULT, &positionsVB, 0);
    cudaGraphicsD3D9RegisterResource(&positionsVB_CUDA,
                                     positionsVB,
                                     cudaGraphicsRegisterFlagsNone);
    cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                    cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;

    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVB_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cudaGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width,
                               unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time)
            * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}

Direct3D 10 Version:

ID3D10Device* device;
ID3D10Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};

int main()
{
    // Get a CUDA-enabled adapter

    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        int dev;
        if (cudaD3D10GetDevice(&dev, adapter) == cudaSuccess)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device
    ...
    D3D10CreateDeviceAndSwapChain(adapter,
                                  D3D10_DRIVER_TYPE_HARDWARE, 0,
                                  D3D10_CREATE_DEVICE_DEBUG,
                                  D3D10_SDK_VERSION,
                                  &swapChainDesc, &swapChain,
                                  &device);
    adapter->Release();

    // Register device with CUDA
    cudaD3D10SetDirect3DDevice(device);

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    D3D10_BUFFER_DESC bufferDesc;
    bufferDesc.Usage          = D3D10_USAGE_DEFAULT;
    bufferDesc.ByteWidth      = size;
    bufferDesc.BindFlags      = D3D10_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags      = 0;
    device->CreateBuffer(&bufferDesc, 0, &positionsVB);
    cudaGraphicsD3D10RegisterResource(&positionsVB_CUDA,
                                      positionsVB,
                                      cudaGraphicsRegisterFlagsNone);
    cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                    cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,

                                         positionsVB_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cudaGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width,
                               unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time)
            * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}

Direct3D 11 Version:

ID3D11Device* device;
ID3D11Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};

int main()
{
    // Get a CUDA-enabled adapter
    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);

0. !adapter. &device... if (cudaD3D11GetDevice(&dev. &deviceContext).. Programming Interface IDXGIAdapter* adapter = 0. bufferDesc.. adapter) == cudaSuccess) break. Render(). device->CreateBuffer(&bufferDesc.2 . } } void Render() { // Map vertex buffer for writing from CUDA float4* positions. int dev.) { . bufferDesc. sFnPtr_D3D11CreateDeviceAndSwapChain(adapter. D3D11_DRIVER_TYPE_HARDWARE. } factory->Release().BindFlags = D3D11_BIND_VERTEX_BUFFER. ++i) { if (FAILED(factory->EnumAdapters(i. &positionsVB).Chapter 3. 0). D3D11_SDK_VERSION. D3D11_BUFFER_DESC bufferDesc. adapter->Release(). positionsVB. D3D11_CREATE_DEVICE_DEBUG. size_t num_bytes..CPUAccessFlags = 0. 3.. . // Launch rendering loop while (. cudaGraphicsMapResources(1. bufferDesc. &featureLevel. 0. // Create swap chain and device . cudaGraphicsRegisterFlagsNone). // Create vertex buffer and register it with CUDA unsigned int size = width * height * sizeof(CUSTOMVERTEX). &positionsVB_CUDA. // Register device with CUDA cudaD3D11SetDirect3DDevice(device). bufferDesc. &swapChain.Usage = D3D11_USAGE_DEFAULT. featureLevels.. &adapter)) break. for (unsigned int i = 0. cudaGraphicsResourceSetMapFlags(positionsVB_CUDA. &swapChainDesc. cudaGraphicsMapFlagsWriteDiscard). bufferDesc..ByteWidth = size. cudaGraphicsD3D11RegisterResource(&positionsVB_CUDA. 50 CUDA C Programming Guide Version 3. adapter->Release().MiscFlags = 0.

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cudaGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}

3.2.8  Error Handling

All runtime functions return an error code, but for an asynchronous function (see Section 3.2.6), this error code cannot possibly report any of the asynchronous errors that could occur on the device, since the function returns before the device has completed the task; the error code only reports errors that occur on the host prior to executing the task, typically related to parameter validation. If an asynchronous error occurs, it will be reported by some subsequent unrelated runtime function call. A minimal checking pattern is sketched below; the paragraphs that follow detail the mechanism.
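As a concrete illustration (a minimal sketch, not one of the guide's samples: the kernel, the launch configuration, and the helper name are invented for the example), the following fragment retrieves a launch-time error immediately after the launch and an asynchronous execution error after synchronizing:

#include <stdio.h>

__global__ void myKernel(float* data) { data[threadIdx.x] *= 2.0f; }

void checkedLaunch(float* d_data)
{
    myKernel<<<1, 256>>>(d_data);

    // Errors detected at launch time (e.g. an invalid configuration)
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("launch error: %s\n", cudaGetErrorString(err));

    // Asynchronous errors only surface once the device has finished
    err = cudaThreadSynchronize();
    if (err != cudaSuccess)
        printf("execution error: %s\n", cudaGetErrorString(err));
}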

The only way to check for asynchronous errors just after some asynchronous function call is therefore to synchronize just after the call by calling cudaThreadSynchronize() (or by using any other synchronization mechanisms described in Section 3.2.6) and checking the error code returned by cudaThreadSynchronize().

The runtime maintains an error variable for each host thread that is initialized to cudaSuccess and is overwritten by the error code every time an error occurs (be it a parameter validation error or an asynchronous error). cudaPeekAtLastError() returns this variable. cudaGetLastError() returns this variable and resets it to cudaSuccess.

Kernel launches do not return any error code, so cudaPeekAtLastError() or cudaGetLastError() must be called just after the kernel launch to retrieve any pre-launch errors. To ensure that any error returned by cudaPeekAtLastError() or cudaGetLastError() does not originate from calls prior to the kernel launch, one has to make sure that the runtime error variable is set to cudaSuccess just before the kernel launch, for example, by calling cudaGetLastError() just before the kernel launch. Kernel launches are asynchronous, so to check for asynchronous errors, the application must synchronize in-between the kernel launch and the call to cudaPeekAtLastError() or cudaGetLastError().

Note that cudaErrorNotReady, which may be returned by cudaStreamQuery() and cudaEventQuery(), is not considered an error and is therefore not reported by cudaPeekAtLastError() or cudaGetLastError().

3.2.9  Call Stack

On devices of compute capability 2.x, the size of the call stack can be queried using cudaThreadGetLimit() and set using cudaThreadSetLimit().

When the call stack overflows, the kernel call fails with a stack overflow error if the application is run via a CUDA debugger (cuda-gdb, Parallel Nsight), and with an unspecified launch error otherwise.
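For example, a minimal sketch of querying and raising the stack limit (the 2048-byte value is an arbitrary illustration, not a recommendation):

size_t stackSize;
cudaThreadGetLimit(&stackSize, cudaLimitStackSize); // current size
cudaThreadSetLimit(cudaLimitStackSize, 2048);       // request 2 KB/thread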

3.3  Driver API

The driver API is a handle-based, imperative API: Most objects are referenced by opaque handles that may be specified to functions to manipulate the objects. The objects available in the driver API are summarized in Table 3-1.

Table 3-1. Objects Available in the CUDA Driver API

Object             Handle       Description
Device             CUdevice     CUDA-enabled device
Context            CUcontext    Roughly equivalent to a CPU process
Module             CUmodule     Roughly equivalent to a dynamic library
Function           CUfunction   Kernel
Heap memory        CUdeviceptr  Pointer to device memory
CUDA array         CUarray      Opaque container for one-dimensional or
                                two-dimensional data on the device,
                                readable via texture or surface references
Texture reference  CUtexref     Object that describes how to interpret
                                texture memory data
Surface reference  CUsurfref    Object that describes how to read or
                                write CUDA arrays

The driver API is implemented in the nvcuda dynamic library and all its entry points are prefixed with cu.

The driver API must be initialized with cuInit() before any function from the driver API is called. A CUDA context must then be created that is attached to a specific device and made current to the calling host thread as detailed in Section 3.3.1.

Within a CUDA context, kernels are explicitly loaded as PTX or binary objects by the host code as described in Section 3.3.2. Kernels written in C must therefore be compiled separately into PTX or binary objects. Kernels are launched using API entry points as described in Section 3.3.3.

Any application that wants to run on future device architectures must load PTX, not binary code. This is because binary code is architecture-specific and therefore incompatible with future architectures, whereas PTX code is compiled to binary code at load time by the driver.

Here is the host code of the sample from Section 2.1 written using the driver API:

int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate input vectors h_A and h_B in host memory
    float* h_A = (float*)malloc(size);
    float* h_B = (float*)malloc(size);

    // Initialize input vectors
    ...

    // Initialize
    cuInit(0);

    // Get number of devices supporting CUDA
    int deviceCount = 0;
    cuDeviceGetCount(&deviceCount);
    if (deviceCount == 0) {
        printf("There is no device supporting CUDA.\n");
        exit (0);
    }

    // Get handle for device 0
    CUdevice cuDevice;
    cuDeviceGet(&cuDevice, 0);

    // Create context
    CUcontext cuContext;
    cuCtxCreate(&cuContext, 0, cuDevice);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "VecAdd.ptx");

    // Get function handle from module
    CUfunction vecAdd;
    cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

    // Allocate vectors in device memory
    CUdeviceptr d_A;
    cuMemAlloc(&d_A, size);
    CUdeviceptr d_B;
    cuMemAlloc(&d_B, size);
    CUdeviceptr d_C;
    cuMemAlloc(&d_C, size);

    // Copy vectors from host memory to device memory
    cuMemcpyHtoD(d_A, h_A, size);
    cuMemcpyHtoD(d_B, h_B, size);

    // Invoke kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(d_A));
    cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
    offset += sizeof(d_A);
    ALIGN_UP(offset, __alignof(d_B));
    cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
    offset += sizeof(d_B);
    ALIGN_UP(offset, __alignof(d_C));
    cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
    offset += sizeof(d_C);
    ALIGN_UP(offset, __alignof(N));
    cuParamSeti(vecAdd, offset, N);
    offset += sizeof(N);
    cuParamSetSize(vecAdd, offset);
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
    cuLaunchGrid(vecAdd, blocksPerGrid, 1);

    ...
}

Full code can be found in the vectorAddDrv SDK code sample.

3.3.1  Context

A CUDA context is analogous to a CPU process. All resources and actions performed within the driver API are encapsulated inside a CUDA context, and the system automatically cleans up these resources when the context is destroyed.

Besides objects such as modules and texture or surface references, each context has its own distinct 32-bit address space. For example, CUdeviceptr values from different contexts reference different memory locations.

A host thread may have only one device context current at a time. When a context is created with cuCtxCreate(), it is made current to the calling host thread. CUDA functions that operate in a context (most functions that do not involve device enumeration or context management) will return CUDA_ERROR_INVALID_CONTEXT if a valid context is not current to the thread.

Each host thread has a stack of current contexts. cuCtxCreate() pushes the new context onto the top of the stack. cuCtxPopCurrent() may be called to detach the context from the host thread. The context is then "floating" and may be pushed as the current context for any host thread. cuCtxPopCurrent() also restores the previous current context, if any.

A usage count is also maintained for each context. cuCtxCreate() creates a context with a usage count of 1. cuCtxAttach() increments the usage count and cuCtxDetach() decrements it. A context is destroyed when the usage count goes to 0 when calling cuCtxDetach() or cuCtxDestroy().

Usage count facilitates interoperability between third party authored code operating in the same context. For example, if three libraries are loaded to use the same context, each library would call cuCtxAttach() to increment the usage count and cuCtxDetach() to decrement the usage count when the library is done using the context. For most libraries, it is expected that the application will have created a context before loading or initializing the library; that way, the application can create the context using its own heuristics, and the library simply operates on the context handed to it. Libraries that wish to create their own contexts – unbeknownst to their API clients who may or may not have created contexts of their own – would use cuCtxPushCurrent() and cuCtxPopCurrent() as illustrated in Figure 3-3.

Figure 3-3. Library Context Management (library initialization: cuCtxCreate() to initialize the context, then cuCtxPopCurrent(); each library call: cuCtxPushCurrent(), use the context, cuCtxPopCurrent())
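A minimal sketch of the Figure 3-3 pattern follows; libInit(), libCall(), and the saved libContext are illustrative names invented for the example, not part of the guide's samples:

static CUcontext libContext;

void libInit(CUdevice dev)
{
    cuCtxCreate(&libContext, 0, dev); // created and made current
    ...                               // library initialization work
    cuCtxPopCurrent(NULL);            // leave the context floating
}

void libCall(void)
{
    cuCtxPushCurrent(libContext);     // make it current again
    ...                               // use the context
    cuCtxPopCurrent(NULL);            // restore the caller's context
}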

3.3.2  Module

Modules are dynamically loadable packages of device code and data, akin to DLLs in Windows, that are output by nvcc (see Section 3.1). The names for all symbols, including functions, global variables, and texture or surface references, are maintained at module scope so that modules written by independent third parties may interoperate in the same CUDA context.

This code sample loads a module and retrieves a handle to some kernel:

CUmodule cuModule;
cuModuleLoad(&cuModule, "myModule.ptx");
CUfunction myKernel;
cuModuleGetFunction(&myKernel, cuModule, "MyKernel");

This code sample compiles and loads a new module from PTX code and parses compilation errors:

#define ERROR_BUFFER_SIZE 100
CUmodule cuModule;
CUjit_option options[3];
void* values[3];
char* PTXCode = "some PTX code";
options[0] = CU_ASM_ERROR_LOG_BUFFER;
values[0]  = (void*)malloc(ERROR_BUFFER_SIZE);
options[1] = CU_ASM_ERROR_LOG_BUFFER_SIZE_BYTES;
values[1]  = (void*)ERROR_BUFFER_SIZE;
options[2] = CU_ASM_TARGET_FROM_CUCONTEXT;
values[2]  = 0;
cuModuleLoadDataEx(&cuModule, PTXCode, 3, options, values);
for (int i = 0; i < (int)(size_t)values[1]; ++i) {
    // Parse error string here
}

3.3.3  Kernel Execution

cuFuncSetBlockShape() sets the number of threads per block for a given function, and how their threadIDs are assigned.

cuFuncSetSharedSize() sets the size of shared memory for the function.

The cuParam*() family of functions is used to specify the parameters that will be provided to the kernel the next time cuLaunchGrid() or cuLaunch() is invoked to launch the kernel.

The second argument of each of the cuParam*() functions specifies the offset of the parameter in the parameter stack. This offset must match the alignment requirement for the parameter type in device code.

Alignment requirements in device code for the built-in vector types are listed in Table B-1. For all other basic types, the alignment requirement in device code matches the alignment requirement in host code and can therefore be obtained using __alignof(). The only exception is when the host compiler aligns double and long long (and long on a 64-bit system) on a one-word boundary instead of a two-word boundary (for example, using gcc's compilation flag -mno-align-double), since in device code these types are always aligned on a two-word boundary.

CUdeviceptr is an integer, but represents a pointer, so its alignment requirement is __alignof(void*).

The following code sample uses a macro to adjust the offset of each parameter to meet its alignment requirement.

#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
int offset = 0;

int i;
ALIGN_UP(offset, __alignof(i));
cuParamSeti(cuFunction, offset, i);
offset += sizeof(i);

float4 f4;
ALIGN_UP(offset, 16); // float4's alignment is 16
cuParamSetv(cuFunction, offset, &f4, sizeof(f4));
offset += sizeof(f4);

char c;
ALIGN_UP(offset, __alignof(c));
cuParamSeti(cuFunction, offset, c);
offset += sizeof(c);

float f;
ALIGN_UP(offset, __alignof(f));
cuParamSetf(cuFunction, offset, f);
offset += sizeof(f);

CUdeviceptr dptr;
ALIGN_UP(offset, __alignof(dptr));
cuParamSetv(cuFunction, offset, &dptr, sizeof(dptr));
offset += sizeof(dptr);

float2 f2;
ALIGN_UP(offset, 8); // float2's alignment is 8
cuParamSetv(cuFunction, offset, &f2, sizeof(f2));
offset += sizeof(f2);

cuParamSetSize(cuFunction, offset);
cuFuncSetBlockShape(cuFunction, blockWidth, blockHeight, 1);
cuLaunchGrid(cuFunction, gridWidth, gridHeight);

The alignment requirement of a structure is equal to the maximum of the alignment requirements of its fields. The alignment requirement of a structure that contains built-in vector types, CUdeviceptr, or non-aligned double and long long, might therefore differ between device code and host code. Such a structure might also be padded differently. The following structure, for example, is not padded at all in host code, but it is padded in device code with 12 bytes after field f since the alignment requirement for field f4 is 16:

typedef struct {
    float  f;
    float4 f4;
} myStruct;

Any parameter of type myStruct must therefore be passed using separate calls to cuParam*(), such as:

myStruct s;
int offset = 0;

cuParamSetv(cuFunction, offset, &s.f, sizeof(s.f));
offset += sizeof(s.f);
ALIGN_UP(offset, 16); // float4's alignment is 16
cuParamSetv(cuFunction, offset, &s.f4, sizeof(s.f4));
offset += sizeof(s.f4);

3.3.4  Device Memory

Linear memory is allocated using cuMemAlloc() or cuMemAllocPitch() and freed using cuMemFree().

Here is the host code of the sample from Section 3.2.1 written using the driver API:

// Host code
int main()
{
    // Initialize
    if (cuInit(0) != CUDA_SUCCESS)
        exit (0);

    // Get number of devices supporting CUDA
    int deviceCount = 0;
    cuDeviceGetCount(&deviceCount);
    if (deviceCount == 0) {
        printf("There is no device supporting CUDA.\n");
        exit (0);
    }

    // Get handle for device 0
    CUdevice cuDevice = 0;
    cuDeviceGet(&cuDevice, 0);

    // Create context
    CUcontext cuContext;
    cuCtxCreate(&cuContext, 0, cuDevice);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "VecAdd.ptx");

    // Get function handle from module
    CUfunction vecAdd;
    cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

    // Allocate vectors in device memory
    size_t size = N * sizeof(float);
    CUdeviceptr d_A;
    cuMemAlloc(&d_A, size);
    CUdeviceptr d_B;
    cuMemAlloc(&d_B, size);
    CUdeviceptr d_C;
    cuMemAlloc(&d_C, size);

    // Copy vectors from host memory to device memory
    // h_A and h_B are input vectors stored in host memory

    cuMemcpyHtoD(d_A, h_A, size);
    cuMemcpyHtoD(d_B, h_B, size);

    // Invoke kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(d_A));
    cuParamSetv(vecAdd, offset, &d_A, sizeof(d_A));
    offset += sizeof(d_A);
    ALIGN_UP(offset, __alignof(d_B));
    cuParamSetv(vecAdd, offset, &d_B, sizeof(d_B));
    offset += sizeof(d_B);
    ALIGN_UP(offset, __alignof(d_C));
    cuParamSetv(vecAdd, offset, &d_C, sizeof(d_C));
    offset += sizeof(d_C);
    cuParamSetSize(vecAdd, offset);
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    cuFuncSetBlockShape(vecAdd, threadsPerBlock, 1, 1);
    cuLaunchGrid(vecAdd, blocksPerGrid, 1);

    // Copy result from device memory to host memory
    // h_C contains the result in host memory
    cuMemcpyDtoH(h_C, d_C, size);

    // Free device memory
    cuMemFree(d_A);
    cuMemFree(d_B);
    cuMemFree(d_C);
}

Linear memory can also be allocated through cuMemAllocPitch(). This function is recommended for allocations of 2D arrays as it makes sure that the allocation is appropriately padded to meet the alignment requirements described in Section 5.3.2.1, therefore ensuring best performance when accessing the row addresses or performing copies between 2D arrays and other regions of device memory (using the cuMemcpy2D() function). The returned pitch (or stride) must be used to access array elements. The following code sample allocates a width×height 2D array of floating-point values and shows how to loop over the array elements in device code:

// Host code (assuming cuModule has been loaded)
CUdeviceptr devPtr;
size_t pitch;
cuMemAllocPitch(&devPtr, &pitch,
                width * sizeof(float), height, 4);
CUfunction myKernel;
cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
cuParamSetv(myKernel, 0, &devPtr, sizeof(devPtr));
cuParamSetSize(myKernel, sizeof(devPtr));
cuFuncSetBlockShape(myKernel, 512, 1, 1);
cuLaunchGrid(myKernel, 100, 1);

// Device code
__global__ void MyKernel(float* devPtr)

{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}

The following code sample allocates a width×height CUDA array of one 32-bit floating-point component:

CUDA_ARRAY_DESCRIPTOR desc;
desc.Format      = CU_AD_FORMAT_FLOAT;
desc.NumChannels = 1;
desc.Width       = width;
desc.Height      = height;
CUarray cuArray;
cuArrayCreate(&cuArray, &desc);

The reference manual lists all the various functions used to copy memory between linear memory allocated with cuMemAlloc(), linear memory allocated with cuMemAllocPitch(), and CUDA arrays. The following code sample copies the 2D array to the CUDA array allocated in the previous code samples:

CUDA_MEMCPY2D copyParam;
memset(&copyParam, 0, sizeof(copyParam));
copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
copyParam.dstArray      = cuArray;
copyParam.srcMemoryType = CU_MEMORYTYPE_DEVICE;
copyParam.srcDevice     = devPtr;
copyParam.srcPitch      = pitch;
copyParam.WidthInBytes  = width * sizeof(float);
copyParam.Height        = height;
cuMemcpy2D(&copyParam);

The following code sample illustrates various ways of accessing global variables via the driver API:

CUdeviceptr devPtr;
size_t bytes;

__constant__ float constData[256];
float data[256];
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "constData");
cuMemcpyHtoD(devPtr, data, bytes);
cuMemcpyDtoH(data, devPtr, bytes);

__device__ float devData;
float value = 3.14f;
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devData");
cuMemcpyHtoD(devPtr, &value, sizeof(float));

__device__ float* devPointer;
CUdeviceptr ptr;
cuMemAlloc(&ptr, 256 * sizeof(float));
cuModuleGetGlobal(&devPtr, &bytes, cuModule, "devPointer");
cuMemcpyHtoD(devPtr, &ptr, sizeof(ptr));

3.3.5  Shared Memory

The following code sample is the driver version of the host code of the sample from Section 3.2.2. In this sample, shared memory is statically allocated within the kernel as opposed to allocated at runtime through cuFuncSetSharedSize().

// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    CUdeviceptr elements;

    // Load A and B to device memory
    Matrix d_A;
    d_A.width = d_A.stride = A.width;
    d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cuMemAlloc(&elements, size);
    cuMemcpyHtoD(elements, A.elements, size);
    d_A.elements = (float*)elements;

    Matrix d_B;
    d_B.width = d_B.stride = B.width;
    d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cuMemAlloc(&elements, size);
    cuMemcpyHtoD(elements, B.elements, size);
    d_B.elements = (float*)elements;

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width;
    d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cuMemAlloc(&elements, size);
    d_C.elements = (float*)elements;

    // Invoke kernel (assuming cuModule has been loaded)
    CUfunction matMulKernel;
    cuModuleGetFunction(&matMulKernel, cuModule, "MatMulKernel");
    int offset = 0;
    cuParamSetv(matMulKernel, offset, &d_A, sizeof(d_A));
    offset += sizeof(d_A);
    cuParamSetv(matMulKernel, offset, &d_B, sizeof(d_B));
    offset += sizeof(d_B);
    cuParamSetv(matMulKernel, offset, &d_C, sizeof(d_C));
    offset += sizeof(d_C);
    cuParamSetSize(matMulKernel, offset);
    cuFuncSetBlockShape(matMulKernel, BLOCK_SIZE, BLOCK_SIZE, 1);

    cuLaunchGrid(matMulKernel,
                 B.width / BLOCK_SIZE, A.height / BLOCK_SIZE);

    // Read C from device memory
    cuMemcpyDtoH(C.elements, (CUdeviceptr)d_C.elements, size);

    // Free device memory
    cuMemFree((CUdeviceptr)d_A.elements);
    cuMemFree((CUdeviceptr)d_B.elements);
    cuMemFree((CUdeviceptr)d_C.elements);
}

3.3.6  Multiple Devices

cuDeviceGetCount() and cuDeviceGet() provide a way to enumerate the devices present in the system, and other functions (described in the reference manual) to retrieve their properties:

int deviceCount;
cuDeviceGetCount(&deviceCount);
for (int device = 0; device < deviceCount; ++device) {
    CUdevice cuDevice;
    cuDeviceGet(&cuDevice, device);
    int major, minor;
    cuDeviceComputeCapability(&major, &minor, cuDevice);
}

3.3.7  Texture and Surface Memory

3.3.7.1  Texture Memory

Texture binding is done using cuTexRefSetAddress() for linear memory and cuTexRefSetArray() for CUDA arrays.

If a module cuModule contains some texture reference texRef defined as

texture<float, 2, cudaReadModeElementType> texRef;

the following code sample retrieves texRef's handle:

CUtexref cuTexRef;
cuModuleGetTexRef(&cuTexRef, cuModule, "texRef");

The following code sample binds texRef to some linear memory pointed to by devPtr:

CUDA_ARRAY_DESCRIPTOR desc;
cuTexRefSetAddress2D(cuTexRef, &desc, devPtr, pitch);

The following code samples bind texRef to a CUDA array cuArray:

cuTexRefSetArray(cuTexRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);

The reference manual lists various functions used to set address mode, filter mode, format, and other flags for some texture reference. The format specified when binding a texture to a texture reference must match the parameters specified when declaring the texture reference; otherwise, the results of texture fetches are undefined.

The following code sample is the driver version of the host code of the sample from Section 3.2.4.1.3:

// Host code
int main()
{
    // Allocate CUDA array in device memory
    CUarray cuArray;
    CUDA_ARRAY_DESCRIPTOR desc;
    desc.Format      = CU_AD_FORMAT_FLOAT;
    desc.NumChannels = 1;
    desc.Width       = width;
    desc.Height      = height;
    cuArrayCreate(&cuArray, &desc);

    // Copy to device memory some data located at address h_data
    // in host memory
    CUDA_MEMCPY2D copyParam;
    memset(&copyParam, 0, sizeof(copyParam));
    copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
    copyParam.dstArray      = cuArray;
    copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
    copyParam.srcHost       = h_data;
    copyParam.srcPitch      = width * sizeof(float);
    copyParam.WidthInBytes  = copyParam.srcPitch;
    copyParam.Height        = height;
    cuMemcpy2D(&copyParam);

    // Set texture parameters
    CUtexref texRef;
    cuModuleGetTexRef(&texRef, cuModule, "texRef");
    cuTexRefSetAddressMode(texRef, 0, CU_TR_ADDRESS_MODE_WRAP);
    cuTexRefSetAddressMode(texRef, 1, CU_TR_ADDRESS_MODE_WRAP);
    cuTexRefSetFilterMode(texRef, CU_TR_FILTER_MODE_LINEAR);
    cuTexRefSetFlags(texRef, CU_TRSF_NORMALIZED_COORDINATES);
    cuTexRefSetFormat(texRef, CU_AD_FORMAT_FLOAT, 1);

    // Bind the array to the texture reference
    cuTexRefSetArray(texRef, cuArray, CU_TRSA_OVERRIDE_FORMAT);

    // Allocate result of transformation in device memory
    CUdeviceptr output;
    cuMemAlloc(&output, width * height * sizeof(float));

    // Invoke kernel (assuming cuModule has been loaded)
    CUfunction transformKernel;
    cuModuleGetFunction(&transformKernel, cuModule,
                        "transformKernel");
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(output));
    cuParamSetv(transformKernel, offset, &output, sizeof(output));
    offset += sizeof(output);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(transformKernel, offset, width);
    offset += sizeof(width);

    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(transformKernel, offset, height);
    offset += sizeof(height);
    ALIGN_UP(offset, __alignof(angle));
    cuParamSetf(transformKernel, offset, angle);
    offset += sizeof(angle);
    cuParamSetSize(transformKernel, offset);
    cuFuncSetBlockShape(transformKernel, 16, 16, 1);
    cuLaunchGrid(transformKernel,
                 (width + 15) / 16, (height + 15) / 16);

    // Free device memory
    cuArrayDestroy(cuArray);
    cuMemFree(output);
}

3.3.7.2  Surface Memory

Surface binding is done using cuSurfRefSetArray() for CUDA arrays.

If a module cuModule contains some surface reference surfRef defined as

surface<void, 2> surfRef;

the following code sample retrieves surfRef's handle:

CUsurfref cuSurfRef;
cuModuleGetSurfRef(&cuSurfRef, cuModule, "surfRef");

The following code samples bind surfRef to a CUDA array cuArray:

cuSurfRefSetArray(cuSurfRef, cuArray, CU_SRSA_USE_ARRAY_FORMAT);

The following code sample is the driver version of the host code of the sample from Section 3.2.4.2:

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    CUDA_ARRAY_DESCRIPTOR desc;
    desc.Format      = CU_AD_FORMAT_UNSIGNED_INT8;
    desc.NumChannels = 4;
    desc.Width       = width;
    desc.Height      = height;
    CUarray cuInputArray;
    cuArrayCreate(&cuInputArray, &desc);
    CUarray cuOutputArray;
    cuArrayCreate(&cuOutputArray, &desc);

    // Copy to device memory some data located at address h_data
    // in host memory
    CUDA_MEMCPY2D copyParam;
    memset(&copyParam, 0, sizeof(copyParam));
    copyParam.dstMemoryType = CU_MEMORYTYPE_ARRAY;
    copyParam.dstArray      = cuInputArray;
    copyParam.srcMemoryType = CU_MEMORYTYPE_HOST;
    copyParam.srcHost       = h_data;
    copyParam.srcPitch      = width * sizeof(float);
    copyParam.WidthInBytes  = copyParam.srcPitch;
    copyParam.Height        = height;

    cuMemcpy2D(&copyParam);

    // Bind the arrays to the surface references
    cuSurfRefSetArray(inputSurfRef, cuInputArray,
                      CU_SRSA_USE_ARRAY_FORMAT);
    cuSurfRefSetArray(outputSurfRef, cuOutputArray,
                      CU_SRSA_USE_ARRAY_FORMAT);

    // Invoke kernel (assuming cuModule has been loaded)
    CUfunction copyKernel;
    cuModuleGetFunction(&copyKernel, cuModule, "copyKernel");
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(copyKernel, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(copyKernel, offset, height);
    offset += sizeof(height);
    cuParamSetSize(copyKernel, offset);
    cuFuncSetBlockShape(copyKernel, 16, 16, 1);
    cuLaunchGrid(copyKernel,
                 (width + 15) / 16, (height + 15) / 16);

    // Free device memory
    cuArrayDestroy(cuInputArray);
    cuArrayDestroy(cuOutputArray);
}

3.3.8  Page-Locked Host Memory

Page-locked host memory can be allocated using cuMemHostAlloc() with optional mutually non-exclusive flags (a usage sketch follows this list):

 CU_MEMHOSTALLOC_PORTABLE to allocate memory that is portable across CUDA contexts (see Section 3.2.5.1);
 CU_MEMHOSTALLOC_WRITECOMBINED to allocate memory as write-combining (see Section 3.2.5.2);
 CU_MEMHOSTALLOC_DEVICEMAP to allocate mapped page-locked memory (see Section 3.2.5.3).

Page-locked host memory is freed using cuMemFreeHost().

Page-locked memory mapping is enabled for a CUDA context by creating the context with the CU_CTX_MAP_HOST flag, and device pointers to mapped page-locked memory are retrieved using cuMemHostGetDevicePointer().

Applications may query whether a device supports mapped page-locked host memory or not by checking the CU_DEVICE_ATTRIBUTE_CAN_MAP_HOST_MEMORY attribute using cuDeviceGetAttribute().
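As an illustration, here is a minimal sketch of allocating mapped page-locked memory through the driver API; the buffer size and the assumption that cuDevice already holds a valid device handle are invented for the example:

CUcontext ctx;
cuCtxCreate(&ctx, CU_CTX_MAP_HOST, cuDevice); // enable mapping

int canMap;
cuDeviceGetAttribute(&canMap,
                     CU_DEVICE_ATTRIBUTE_CAN_MAP_HOST_MEMORY,
                     cuDevice);
if (canMap) {
    float* h_ptr;
    cuMemHostAlloc((void**)&h_ptr, 1024 * sizeof(float),
                   CU_MEMHOSTALLOC_DEVICEMAP);
    CUdeviceptr d_ptr;
    cuMemHostGetDevicePointer(&d_ptr, h_ptr, 0); // device-side alias
    ...
    cuMemFreeHost(h_ptr);
}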

3.3.9  Asynchronous Concurrent Execution

Applications may query if a device can perform copies between page-locked host memory and device memory concurrently with kernel execution by checking the CU_DEVICE_ATTRIBUTE_GPU_OVERLAP attribute using cuDeviceGetAttribute().

Applications may query if a device supports multiple kernels running concurrently by checking the CU_DEVICE_ATTRIBUTE_CONCURRENT_KERNELS attribute using cuDeviceGetAttribute().

3.3.9.1  Stream

The driver API provides functions similar to the runtime API to manage streams. The following code sample is the driver version of the code sample from Section 3.2.6.5:

CUstream stream[2];
for (int i = 0; i < 2; ++i)
    cuStreamCreate(&stream[i], 0);
float* hostPtr;
cuMemAllocHost((void**)&hostPtr, 2 * size);

for (int i = 0; i < 2; ++i)
    cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
                      size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(outputDevPtr));
    cuParamSetv(cuFunction, offset,
                &outputDevPtr, sizeof(outputDevPtr));
    offset += sizeof(outputDevPtr);
    ALIGN_UP(offset, __alignof(inputDevPtr));
    cuParamSetv(cuFunction, offset,
                &inputDevPtr, sizeof(inputDevPtr));
    offset += sizeof(inputDevPtr);
    ALIGN_UP(offset, __alignof(size));
    cuParamSeti(cuFunction, offset, size);
    offset += sizeof(int);
    cuParamSetSize(cuFunction, offset);
    cuFuncSetBlockShape(cuFunction, 512, 1, 1);
    cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
    cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
                      size, stream[i]);
cuCtxSynchronize();

for (int i = 0; i < 2; ++i)
    cuStreamDestroy(stream[i]);

3.3.9.2  Event Management

The driver API provides functions similar to the runtime API to manage events. The following code sample is the driver version of the code sample from Section 3.2.6.6:

CUevent start, stop;
cuEventCreate(&start, CU_EVENT_DEFAULT);
cuEventCreate(&stop, CU_EVENT_DEFAULT);

cuEventRecord(start, 0);
for (int i = 0; i < 2; ++i)
    cuMemcpyHtoDAsync(inputDevPtr + i * size, hostPtr + i * size,
                      size, stream[i]);
for (int i = 0; i < 2; ++i) {
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(outputDevPtr));
    cuParamSetv(cuFunction, offset,
                &outputDevPtr, sizeof(outputDevPtr));
    offset += sizeof(outputDevPtr);
    ALIGN_UP(offset, __alignof(inputDevPtr));
    cuParamSetv(cuFunction, offset,
                &inputDevPtr, sizeof(inputDevPtr));
    offset += sizeof(inputDevPtr);
    ALIGN_UP(offset, __alignof(size));
    cuParamSeti(cuFunction, offset, size);
    offset += sizeof(size);
    cuParamSetSize(cuFunction, offset);
    cuFuncSetBlockShape(cuFunction, 512, 1, 1);
    cuLaunchGridAsync(cuFunction, 100, 1, stream[i]);
}
for (int i = 0; i < 2; ++i)
    cuMemcpyDtoHAsync(hostPtr + i * size, outputDevPtr + i * size,
                      size, stream[i]);
cuEventRecord(stop, 0);
cuEventSynchronize(stop);
float elapsedTime;
cuEventElapsedTime(&elapsedTime, start, stop);

They are destroyed this way:

cuEventDestroy(start);
cuEventDestroy(stop);

3.3.9.3  Synchronous Calls

Whether the host thread will yield, block, or spin on a synchronous function call can be specified by calling cuCtxCreate() with some specific flags as described in the reference manual.
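For example, a context that blocks the host thread on synchronous calls could be created as sketched below; the flag choice is illustrative, and the reference manual lists the alternatives (such as CU_CTX_SCHED_SPIN and CU_CTX_SCHED_YIELD):

CUcontext ctx;
cuCtxCreate(&ctx, CU_CTX_BLOCKING_SYNC, cuDevice);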

3.3.10  Graphics Interoperability

The driver API provides functions similar to the runtime API to manage graphics interoperability.

A resource must be registered to CUDA before it can be mapped using the functions mentioned in Sections 3.3.10.1 and 3.3.10.2. These functions return a CUDA graphics resource of type CUgraphicsResource. Registering a resource is potentially high-overhead and therefore typically called only once per resource. A CUDA graphics resource is unregistered using cuGraphicsUnregisterResource().

Once a resource is registered to CUDA, it can be mapped and unmapped as many times as necessary using cuGraphicsMapResources() and cuGraphicsUnmapResources(). cuGraphicsResourceSetMapFlags() can be called to specify usage hints (write-only, read-only) that the CUDA driver can use to optimize resource management.

A mapped resource can be read from or written to by kernels using the device memory address returned by cuGraphicsResourceGetMappedPointer() for buffers and cuGraphicsSubResourceGetMappedArray() for CUDA arrays.

Accessing a resource through OpenGL or Direct3D while it is mapped to CUDA produces undefined results.

Sections 3.3.10.1 and 3.3.10.2 give specifics for each graphics API and some code samples.

3.3.10.1  OpenGL Interoperability

Interoperability with OpenGL requires that the CUDA context be specifically created using cuGLCtxCreate() instead of cuCtxCreate().

The OpenGL resources that may be mapped into the address space of CUDA are OpenGL buffer, texture, and renderbuffer objects. A buffer object is registered using cuGraphicsGLRegisterBuffer(). A texture or renderbuffer object is registered using cuGraphicsGLRegisterImage(). The same restrictions described in Section 3.2.7.1 apply.

The following code sample is the driver version of the code sample from Section 3.2.7.1:

GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;
CUfunction createVertices;

int main()
{
    // Initialize driver API
    ...

    // Get handle for device 0
    CUdevice cuDevice = 0;
    cuDeviceGet(&cuDevice, 0);

    // Create context
    CUcontext cuContext;
    cuGLCtxCreate(&cuContext, 0, cuDevice);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Initialize OpenGL and GLUT
    ...
    glutDisplayFunc(display);

    // Create buffer object and register it with CUDA
    glGenBuffers(1, &positionsVBO);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    unsigned int size = width * height * 4 * sizeof(float);
    glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    cuGraphicsGLRegisterBuffer(&positionsVBO_CUDA,
                               positionsVBO,
                               cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    glutMainLoop();
}

void display()
{
    // Map OpenGL buffer object for writing from CUDA
    CUdeviceptr positions;
    cuGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer(&positions, &num_bytes,
                                       positionsVBO_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset,
                &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);
    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap buffer object

    cuGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

    // Render from buffer object
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, width * height);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}

void deleteVBO()
{
    cuGraphicsUnregisterResource(positionsVBO_CUDA);
    glDeleteBuffers(1, &positionsVBO);
}

On Windows and for Quadro GPUs, cuWGLGetDevice() can be used to retrieve the CUDA device associated to the handle returned by wglEnumGpusNV().

3.3.10.2  Direct3D Interoperability

Interoperability with Direct3D requires that the Direct3D device be specified when the CUDA context is created. This is done by creating the CUDA context using cuD3D9CtxCreate() or cuD3D9CtxCreateOnDevice() (resp. cuD3D10CtxCreate() or cuD3D10CtxCreateOnDevice() and cuD3D11CtxCreate() or cuD3D11CtxCreateOnDevice()) instead of cuCtxCreate().

Two sets of calls are also available to allow the creation of CUDA devices with interoperability with Direct3D devices that use NVIDIA SLI in AFR (Alternate Frame Rendering) mode. These two new sets of calls are cuD3D[9|10|11]CtxCreateOnDevice() and cuD3D[9|10|11]GetDevices(). A call to cuD3D[9|10|11]GetDevices() should be used to obtain a list of CUDA device handles that can be passed as the last parameter to cuD3D[9|10|11]CtxCreateOnDevice().

Applications that intend to support interoperability between Direct3D devices in SLI configurations and CUDA should be written to only use these calls instead of the cuD3D[9|10|11]CtxCreate() calls. In addition, they can call cuCtxPushCurrent() and cuCtxPopCurrent() to change the CUDA context active at a given time. See Section 4.3 for general recommendations related to interoperability between Direct3D devices using SLI and CUDA contexts.

The Direct3D resources that may be mapped into the address space of CUDA are Direct3D buffers, textures, and surfaces. These resources are registered using cuGraphicsD3D9RegisterResource(), cuGraphicsD3D10RegisterResource(), and cuGraphicsD3D11RegisterResource().

The following code sample is the driver version of the host code of the sample from Section 3.2.7.2.

Direct3D 9 Version:

IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
    // Initialize Direct3D
    D3D = Direct3DCreate9(D3D_SDK_VERSION);

    // Get a CUDA-enabled adapter
    unsigned int adapter = 0;
    for (; adapter < g_pD3D->GetAdapterCount(); adapter++) {
        D3DADAPTER_IDENTIFIER9 adapterId;
        g_pD3D->GetAdapterIdentifier(adapter, 0, &adapterId);
        int dev;
        if (cuD3D9GetDevice(&dev,
                            adapterId.DeviceName) == cudaSuccess)
            break;
    }

    // Create device
    ...
    D3D->CreateDevice(adapter, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING,
                      &params, &device);

    // Initialize driver API
    ...

    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D9CtxCreate(&cuContext, &cuDevice, 0, device);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
                               D3DPOOL_DEFAULT, &positionsVB, 0);
    cuGraphicsD3D9RegisterResource(&positionsVB_CUDA,
                                   positionsVB,
                                   cudaGraphicsRegisterFlagsNone);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer((void**)&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset,
                &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);
    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

Direct3D 10 Version:

ID3D10Device* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
ID3D10Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
    // Get a CUDA-enabled adapter
    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        int dev;
        if (cuD3D10GetDevice(&dev, adapter) == cudaSuccess)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device
    ...
    D3D10CreateDeviceAndSwapChain(adapter,
                                  D3D10_DRIVER_TYPE_HARDWARE, 0,
                                  D3D10_CREATE_DEVICE_DEBUG,
                                  D3D10_SDK_VERSION,
                                  &swapChainDesc, &swapChain,
                                  &device);
    adapter->Release();

    // Initialize driver API
    ...

    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D10CtxCreate(&cuContext, &cuDevice, 0, device);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    D3D10_BUFFER_DESC bufferDesc;

    bufferDesc.Usage          = D3D10_USAGE_DEFAULT;
    bufferDesc.ByteWidth      = size;
    bufferDesc.BindFlags      = D3D10_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags      = 0;
    device->CreateBuffer(&bufferDesc, 0, &positionsVB);
    cuGraphicsD3D10RegisterResource(&positionsVB_CUDA,
                                    positionsVB,
                                    cudaGraphicsRegisterFlagsNone);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer((void**)&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset,
                &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);
    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);
    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

Direct3D 11 Version:

ID3D11Device* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
ID3D11Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
    // Get a CUDA-enabled adapter
    IDXGIFactory* factory;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
    IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        int dev;
        if (cuD3D11GetDevice(&dev, adapter) == cudaSuccess)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device
    ...
    sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
                                         D3D11_DRIVER_TYPE_HARDWARE,
                                         0,
                                         D3D11_CREATE_DEVICE_DEBUG,
                                         featureLevels, 3,
                                         D3D11_SDK_VERSION,
                                         &swapChainDesc, &swapChain,
                                         &device,
                                         &featureLevel,
                                         &deviceContext);
    adapter->Release();

    // Initialize driver API
    ...

    // Create context
    CUdevice cuDevice;
    CUcontext cuContext;
    cuD3D11CtxCreate(&cuContext, &cuDevice, 0, device);

    // Create module from binary file
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "createVertices.ptx");

    // Get function handle from module
    cuModuleGetFunction(&createVertices, cuModule, "createVertices");

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    D3D11_BUFFER_DESC bufferDesc;
    bufferDesc.Usage          = D3D11_USAGE_DEFAULT;
    bufferDesc.ByteWidth      = size;
    bufferDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags      = 0;
    device->CreateBuffer(&bufferDesc, 0, &positionsVB);
    cuGraphicsD3D11RegisterResource(&positionsVB_CUDA,
                                    positionsVB,
                                    cudaGraphicsRegisterFlagsNone);
    cuGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                  cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
}

void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cuGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cuGraphicsResourceGetMappedPointer((void**)&positions, &num_bytes,
                                       positionsVB_CUDA);

    // Execute kernel
#define ALIGN_UP(offset, alignment) \
        (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)
    int offset = 0;
    ALIGN_UP(offset, __alignof(positions));
    cuParamSetv(createVertices, offset,
                &positions, sizeof(positions));
    offset += sizeof(positions);
    ALIGN_UP(offset, __alignof(time));
    cuParamSetf(createVertices, offset, time);
    offset += sizeof(time);
    ALIGN_UP(offset, __alignof(width));
    cuParamSeti(createVertices, offset, width);
    offset += sizeof(width);
    ALIGN_UP(offset, __alignof(height));
    cuParamSeti(createVertices, offset, height);

    offset += sizeof(height);
    cuParamSetSize(createVertices, offset);
    int threadsPerBlock = 16;
    cuFuncSetBlockShape(createVertices,
                        threadsPerBlock, threadsPerBlock, 1);
    cuLaunchGrid(createVertices,
                 width / threadsPerBlock, height / threadsPerBlock);

    // Unmap vertex buffer
    cuGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cuGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

3.3.11  Error Handling

All driver functions return an error code, but for an asynchronous function (see Section 3.3.9), this error code cannot possibly report any of the asynchronous errors that could occur on the device, since the function returns before the device has completed the task; the error code only reports errors that occur on the host prior to executing the task, typically related to parameter validation. If an asynchronous error occurs, it will be reported by some subsequent unrelated function call.

The only way to check for asynchronous errors just after some asynchronous function call is therefore to synchronize just after the call by calling cuCtxSynchronize() (or by using any other synchronization mechanisms described in Section 3.3.9) and checking the error code returned by cuCtxSynchronize().
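A minimal checking pattern is sketched below; the launch call is a placeholder standing in for any asynchronous driver function:

CUresult err = cuLaunchGrid(cuFunction, 100, 1);
if (err != CUDA_SUCCESS) {
    // Host-side error (e.g. parameter validation)
}
err = cuCtxSynchronize();
if (err != CUDA_SUCCESS) {
    // Asynchronous error that occurred on the device
}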

3.3.12  Call Stack

On devices of compute capability 2.x, the size of the call stack can be queried using cuCtxGetLimit() and set using cuCtxSetLimit().

3.4  Interoperability between Runtime and Driver APIs

An application can mix runtime API code with driver API code.

If a context is created and made current via the driver API, subsequent runtime calls will pick up this context instead of creating a new one.

If the runtime is initialized (implicitly as mentioned in Section 3.2), cuCtxAttach() can be used to retrieve the context created during initialization. This context can be used by subsequent driver API calls.

Device memory can be allocated and freed using either API. CUdeviceptr can be cast to regular pointers and vice-versa:

CUdeviceptr devPtr;
float* d_data;

// Allocation using driver API
cuMemAlloc(&devPtr, size);
d_data = (float*)devPtr;

// Allocation using runtime API
cudaMalloc(&d_data, size);
devPtr = (CUdeviceptr)d_data;

In particular, this means that applications written using the driver API can invoke libraries written using the runtime API (such as CUFFT, CUBLAS, …).

All functions from the device and version management sections of the reference manual can be used interchangeably.

3.5  Versioning and Compatibility

There are two version numbers that developers should care about when developing a CUDA application: the compute capability that describes the general specifications and features of the compute device (see Section 2.5), and the version of the CUDA driver API that describes the features supported by the driver API and runtime.

The version of the driver API is defined in the driver header file as CUDA_VERSION. It allows developers to check whether their application requires a newer driver than the one currently installed. This is important, because the driver API is backward compatible, meaning that applications, plug-ins, and libraries (including the C runtime) compiled against a particular version of the driver API will continue to work on subsequent driver releases as illustrated in Figure 3-4. The driver API is not forward compatible, which means that applications, plug-ins, and libraries (including the C runtime) compiled against a particular version of the driver API will not work on previous versions of the driver.

It is important to note that mixing and matching versions is not supported, since only one version of the CUDA driver can be installed on a system; specifically (a version-check sketch follows this list):

 All applications, plug-ins, and libraries on a system must use the same version of the CUDA driver API.
 All plug-ins and libraries used by an application must use the same version of the runtime.
 All plug-ins and libraries used by an application must use the same version of any libraries that use the runtime (such as CUFFT, CUBLAS, …).
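As a minimal sketch of such a version check (the printed message is illustrative only):

int driverVersion;
cuDriverGetVersion(&driverVersion);
if (driverVersion < CUDA_VERSION)
    printf("Installed driver (%d) is older than this application's "
           "driver API version (%d).\n", driverVersion, CUDA_VERSION);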

Figure 3-4. The Driver API is Backward, but Not Forward Compatible (Diagram: apps, libs, and plug-ins compiled against the 1.0 and 1.1 driver APIs are compatible with the 2.0 driver, whereas those compiled against the 2.0 driver API are incompatible with the 1.0 and 1.1 drivers.)

3.6  Compute Modes

On Tesla solutions running Linux, one can set any device in a system in one of the three following modes using NVIDIA's System Management Interface (nvidia-smi), which is a tool distributed as part of the Linux driver:

 Default compute mode: Multiple host threads can use the device (by calling cudaSetDevice() on this device, when using the runtime API, or by making current a context associated to the device, when using the driver API) at the same time.
 Exclusive compute mode: Only one host thread can use the device at any given time.
 Prohibited compute mode: No host thread can use the device.

This means, in particular, that a host thread using the runtime API without explicitly calling cudaSetDevice() might be associated with a device other than device 0 if device 0 turns out to be in prohibited compute mode or in exclusive compute mode and used by another host thread. cudaSetValidDevices() can be used to set a device from a prioritized list of devices.

Applications may query the compute mode of a device by calling cudaGetDeviceProperties() and checking the computeMode property, or by checking the CU_DEVICE_COMPUTE_MODE attribute using cuDeviceGetAttribute(). A short sketch of this query follows.
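A minimal query sketch using the runtime API; device 0 and the message are illustrative choices:

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.computeMode == cudaComputeModeProhibited)
    printf("Device 0 cannot be used by this host thread.\n");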

3.7  Mode Switches

GPUs dedicate some DRAM memory to the so-called primary surface, which is used to refresh the display device whose output is viewed by the user. When users initiate a mode switch of the display by changing the resolution or bit depth of the display (using NVIDIA control panel or the Display control panel on Windows), the amount of memory needed for the primary surface changes. For example, if the user changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-screen graphics applications running with anti-aliasing enabled may require much more display memory for the primary surface.) On Windows, other events that may initiate display mode switches include launching a full-screen DirectX application, hitting Alt+Tab to task switch away from a full-screen DirectX application, or hitting Ctrl+Alt+Del to lock the computer.

If a mode switch increases the amount of memory needed for the primary surface, the system may have to cannibalize memory allocations dedicated to CUDA applications. Therefore, a mode switch results in any call to the CUDA runtime failing and returning an invalid context error.

Chapter 4. Hardware Implementation

The CUDA architecture is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). When a CUDA program on the host CPU invokes a kernel grid, the blocks of the grid are enumerated and distributed to multiprocessors with available execution capacity. The threads of a thread block execute concurrently on one multiprocessor, and multiple thread blocks can execute concurrently on one multiprocessor. As thread blocks terminate, new blocks are launched on the vacated multiprocessors.

A multiprocessor is designed to execute hundreds of threads concurrently. To manage such a large amount of threads, it employs a unique architecture called SIMT (Single-Instruction, Multiple-Thread) that is described in Section 4.1. To maximize utilization of its functional units, it leverages thread-level parallelism by using hardware multithreading as detailed in Section 4.2, more so than instruction-level parallelism within a single thread (instructions are pipelined, but unlike CPU cores they are executed in order and there is no branch prediction and no speculative execution).

Sections 4.1 and 4.2 describe the architecture features of the streaming multiprocessor that are common to all devices. Sections G.3.1 and G.4.1 provide the specifics for devices of compute capabilities 1.x and 2.x, respectively.

4.1  SIMT Architecture

The multiprocessor creates, manages, schedules, and executes threads in groups of 32 parallel threads called warps. Individual threads composing a warp start together at the same program address, but they have their own instruction address counter and register state and are therefore free to branch and execute independently. The term warp originates from weaving, the first parallel thread technology. A half-warp is either the first or second half of a warp. A quarter-warp is either the first, second, third, or fourth quarter of a warp.

When a multiprocessor is given one or more thread blocks to execute, it partitions them into warps that get scheduled by a warp scheduler for execution. The way a block is partitioned into warps is always the same; each warp contains threads of consecutive, increasing thread IDs with the first warp containing thread 0. Section 2.2 describes how thread IDs relate to thread indices in the block.

A warp executes one common instruction at a time, so full efficiency is realized when all 32 threads of a warp agree on their execution path. If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads converge back to the same execution path. Branch divergence occurs only within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths. The sketch below illustrates a divergent branch.
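As a hedged illustration (this kernel is invented for the example and is not one of the guide's samples), the following condition splits every warp into two halves, so the two branch bodies execute serially within each warp:

// Threads 0-15 and 16-31 of each warp take different paths.
__global__ void divergent(float* data)
{
    unsigned int tid = threadIdx.x;
    if ((tid % 32) < 16)          // first half-warp
        data[tid] = data[tid] * 2.0f;
    else                          // second half-warp
        data[tid] = data[tid] + 1.0f;
}

Restructuring the condition so that all 32 threads of a warp agree (for example, branching on the warp index rather than the lane index) would avoid the serialization.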

A warp executes one common instruction at a time, so full efficiency is realized when all 32 threads of a warp agree on their execution path. If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads converge back to the same execution path. Branch divergence occurs only within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths.

The SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector organizations in that a single instruction controls multiple processing elements. A key difference is that SIMD vector organizations expose the SIMD width to the software, whereas SIMT instructions specify the execution and branching behavior of a single thread. In contrast with SIMD vector machines, SIMT enables programmers to write thread-level parallel code for independent, scalar threads, as well as data-parallel code for coordinated threads. Vector architectures, on the other hand, require the software to coalesce loads into vectors and manage divergence manually.

For the purposes of correctness, the programmer can essentially ignore the SIMT behavior; however, substantial performance improvements can be realized by taking care that the code seldom requires threads in a warp to diverge. In practice, this is analogous to the role of cache lines in traditional code: Cache line size can be safely ignored when designing for correctness but must be considered in the code structure when designing for peak performance.

If a non-atomic instruction executed by a warp writes to the same location in global or shared memory for more than one of the threads of the warp, the number of serialized writes that occur to that location varies depending on the compute capability of the device (see Sections G.3.2, G.3.3, G.4.2, and G.4.3) and which thread performs the final write is undefined. If an atomic instruction (see Section B.11) executed by a warp reads, modifies, and writes to the same location in global memory for more than one of the threads of the warp, each read, modify, write to that location occurs and they are all serialized, but the order in which they occur is undefined.

4.2 Hardware Multithreading

The execution context (program counters, registers, etc.) for each warp processed by a multiprocessor is maintained on-chip during the entire lifetime of the warp. Switching from one execution context to another therefore has no cost, and at every instruction issue time, a warp scheduler selects a warp that has threads ready to execute its next instruction (active threads) and issues the instruction to those threads.

In particular, each multiprocessor has a set of 32-bit registers that are partitioned among the warps, and a parallel data cache or shared memory that is partitioned among the thread blocks.

The number of blocks and warps that can reside and be processed together on the multiprocessor for a given kernel depends on the amount of registers and shared memory used by the kernel and the amount of registers and shared memory available on the multiprocessor. There are also a maximum number of resident blocks and a maximum number of resident warps per multiprocessor. These limits, as well as the amount of registers and shared memory available on the multiprocessor, are a function of the compute capability of the device and are given in Appendix G. If there are not enough registers or shared memory available per multiprocessor to process at least one block, the kernel will fail to launch.

The total number of warps Wblock in a block is as follows:

    Wblock = ceil(T / Wsize, 1)

 T is the number of threads per block,
 Wsize is the warp size, which is equal to 32,
 ceil(x, y) is equal to x rounded up to the nearest multiple of y.

The total number of registers Rblock allocated for a block is as follows:

For devices of compute capability 1.x:

    Rblock = ceil(ceil(Wblock, GW) * Wsize * Rk, GT)

For devices of compute capability 2.x:

    Rblock = ceil(Rk * Wsize, GT) * Wblock

 GW is the warp allocation granularity, equal to 2 (compute capability 1.x only),
 Rk is the number of registers used by the kernel,
 GT is the thread allocation granularity, equal to 256 for devices of compute capability 1.0 and 1.1, 512 for devices of compute capability 1.2 and 1.3, and 64 for devices of compute capability 2.x.

The total amount of shared memory Sblock in bytes allocated for a block is as follows:

    Sblock = ceil(Sk, GS)

 Sk is the amount of shared memory used by the kernel in bytes,
 GS is the shared memory allocation granularity, which is equal to 512 for devices of compute capability 1.x and 128 for devices of compute capability 2.x.
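As a small illustration, the host-side helpers below evaluate these formulas directly; this is a sketch written for this section, and the ceil_to() helper and its parameter names are not part of any CUDA API:

#include <stddef.h>

// ceil(x, y) as used above: x rounded up to the nearest multiple of y.
static size_t ceil_to(size_t x, size_t y) { return ((x + y - 1) / y) * y; }

// T: threads per block, Rk: registers per thread.
// Compute capability 1.x: Rblock = ceil(ceil(Wblock, GW) * Wsize * Rk, GT)
size_t registers_per_block_sm1x(size_t T, size_t Rk, size_t GW, size_t GT)
{
    size_t Wsize  = 32;
    size_t Wblock = ceil_to(T, Wsize) / Wsize;   // Wblock = ceil(T / Wsize, 1)
    return ceil_to(ceil_to(Wblock, GW) * Wsize * Rk, GT);
}

// Compute capability 2.x: Rblock = ceil(Rk * Wsize, GT) * Wblock
size_t registers_per_block_sm2x(size_t T, size_t Rk, size_t GT)
{
    size_t Wsize  = 32;
    size_t Wblock = ceil_to(T, Wsize) / Wsize;
    return ceil_to(Rk * Wsize, GT) * Wblock;
}

// Sblock = ceil(Sk, GS)
size_t shared_per_block(size_t Sk, size_t GS) { return ceil_to(Sk, GS); }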

4.3 Multiple Devices

In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the CUDA driver and runtime as separate devices. There are however special considerations as described below when the system is in SLI mode.

First, an allocation in one CUDA device on one GPU will consume memory on other GPUs that are part of the SLI configuration of the Direct3D device. Because of this, allocations may fail earlier than otherwise expected.

Second, applications have to create multiple CUDA contexts, one for each GPU in the SLI configuration, and deal with the fact that a different GPU is used for rendering by the Direct3D device at every frame. The application can use the cuD3D[9|10|11]GetDevices() set of calls to identify the CUDA device handle(s) for the GPU(s) that are performing the rendering in the current and next frame. Given this information, the application will typically map Direct3D resources to the CUDA context corresponding to the CUDA device returned by cuD3D[9|10|11]GetDevices() when the deviceList parameter is set to CU_D3D10_DEVICE_LIST_CURRENT_FRAME. See Sections 3.2.7.2 and 3.3.10.2 for details on how to use CUDA-Direct3D interoperability.

Chapter 5. Performance Guidelines

5.1 Overall Performance Optimization Strategies

Performance optimization revolves around three basic strategies:
 Maximize parallel execution to achieve maximum utilization;
 Optimize memory usage to achieve maximum memory throughput;
 Optimize instruction usage to achieve maximum instruction throughput.

Which strategies will yield the best performance gain for a particular portion of an application depends on the performance limiters for that portion; optimizing instruction usage of a kernel that is mostly limited by memory accesses will not yield any significant performance gain, for example. Optimization efforts should therefore be constantly directed by measuring and monitoring the performance limiters, for example using the CUDA profiler. Also, comparing the floating-point operation throughput or memory throughput, whichever makes more sense, of a particular kernel to the corresponding peak theoretical throughput of the device indicates how much room for improvement there is for the kernel.

5.2 Maximize Utilization

To maximize utilization the application should be structured in a way that it exposes as much parallelism as possible and efficiently maps this parallelism to the various components of the system to keep them busy most of the time.

5.2.1 Application Level

At a high level, the application should maximize parallel execution between the host, the devices, and the bus connecting the host to the devices, by using asynchronous function calls and streams as described in Section 3.2.6. It should assign to each processor the type of work it does best: serial workloads to the host; parallel workloads to the devices.
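As a hedged illustration of overlapping host-device work with streams (the kernel, the buffers, and N are placeholders invented for this sketch), independent transfers and launches can be issued in separate streams:

// Sketch: overlap copies and kernel execution using two streams.
cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
for (int i = 0; i < 2; ++i) {
    // Asynchronous copies require page-locked host memory (Section 3.2.5).
    cudaMemcpyAsync(devIn + i * N, hostIn + i * N,
                    N * sizeof(float), cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<N / 256, 256, 0, stream[i]>>>(devOut + i * N, devIn + i * N);
    cudaMemcpyAsync(hostOut + i * N, devOut + i * N,
                    N * sizeof(float), cudaMemcpyDeviceToHost, stream[i]);
}
for (int i = 0; i < 2; ++i)
    cudaStreamSynchronize(stream[i]);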

For the parallel workloads, at points in the algorithm where parallelism is broken because some threads need to synchronize in order to share data with each other, there are two cases: Either these threads belong to the same block, in which case they should use __syncthreads() and share data through shared memory within the same kernel invocation, or they belong to different blocks, in which case they must share data through global memory using two separate kernel invocations, one for writing to and one for reading from global memory. The second case is much less optimal since it adds the overhead of extra kernel invocations and global memory traffic. Its occurrence should therefore be minimized by mapping the algorithm to the CUDA programming model in such a way that the computations that require inter-thread communication are performed within a single thread block as much as possible.

5.2.2 Device Level

At a lower level, the application should maximize parallel execution between the multiprocessors of a device.

For devices of compute capability 1.x, only one kernel can execute on a device at one time, so the kernel should be launched with at least as many thread blocks as there are multiprocessors in the device.

For devices of compute capability 2.x, multiple kernels can execute concurrently on a device, so maximum utilization can also be achieved by using streams to enable enough kernels to execute concurrently as described in Section 3.2.6.3.

5.2.3 Multiprocessor Level

At an even lower level, the application should maximize parallel execution between the various functional units within a multiprocessor.

As described in Section 4.2, a GPU multiprocessor relies on thread-level parallelism to maximize utilization of its functional units. Utilization is therefore directly linked to the number of resident warps. At every instruction issue time, a warp scheduler selects a warp that is ready to execute its next instruction, if any, and issues the instruction to the active threads of the warp. The number of clock cycles it takes for a warp to be ready to execute its next instruction is called the latency, and full utilization is achieved when all warp schedulers always have some instruction to issue for some warp at every clock cycle during that latency period, or in other words, when latency is completely "hidden". The number of instructions required to hide a latency of L clock cycles depends on the respective throughputs of these instructions (see Section 5.4.1 for the throughputs of various arithmetic instructions); assuming maximum throughput for all instructions, it is:
 L/4 (rounded up to nearest integer) for devices of compute capability 1.x, since a multiprocessor issues one instruction per warp over 4 clock cycles, as mentioned in Section G.3.1,
 L (rounded up to nearest integer) for devices of compute capability 2.0, since a multiprocessor issues one instruction per warp over 2 clock cycles for 2 warps at a time, as mentioned in Section G.4.1,
 2L (rounded up to nearest integer) for devices of compute capability 2.1, since a multiprocessor issues a pair of instructions per warp over 2 clock cycles for 2 warps at a time, as mentioned in Section G.4.1.

For devices of compute capability 2.0, the two instructions issued every other cycle are for two different warps. For devices of compute capability 2.1, the four instructions issued every other cycle are two pairs for two different warps, each pair being for the same warp.

The most common reason a warp is not ready to execute its next instruction is that the instruction's input operands are not yet available.

If all input operands are registers, latency is caused by register dependencies, i.e. some of the input operands are written by some previous instruction(s) whose execution has not completed yet. In the case of a back-to-back register dependency (i.e. some input operand is written by the previous instruction), the latency is equal to the execution time of the previous instruction and the warp scheduler must schedule instructions for different warps during that time. Execution time varies depending on the instruction, but it is typically about 22 clock cycles, which translates to 6 warps for devices of compute capability 1.x and 22 warps for devices of compute capability 2.x.

If some input operand resides in off-chip memory, the latency is much higher: 400 to 800 clock cycles. The number of warps required to keep the warp schedulers busy during such high latency periods depends on the kernel code; in general, more warps are required if the ratio of the number of instructions with no off-chip memory operands (i.e. arithmetic instructions most of the time) to the number of instructions with off-chip memory operands is low (this ratio is commonly called the arithmetic intensity of the program). If this ratio is 15, for example, then to hide latencies of about 600 clock cycles, about 10 warps are required for devices of compute capability 1.x and about 40 for devices of compute capability 2.x.

Another reason a warp is not ready to execute its next instruction is that it is waiting at some memory fence (Section B.5) or synchronization point (Section B.6). A synchronization point can force the multiprocessor to idle as more and more warps wait for other warps in the same block to complete execution of instructions prior to the synchronization point. Having multiple resident blocks per multiprocessor can help reduce idling in this case, as warps from different blocks do not need to wait for each other at synchronization points.

The number of blocks and warps residing on each multiprocessor for a given kernel call depends on the execution configuration of the call (Section B.16), the memory resources of the multiprocessor, and the resource requirements of the kernel as described in Section 4.2. To assist programmers in choosing thread block size based on register and shared memory requirements, the CUDA Software Development Kit provides a spreadsheet, called the CUDA Occupancy Calculator, where occupancy is defined as the ratio of the number of resident warps to the maximum number of resident warps (given in Appendix G for various compute capabilities).

Register, local, shared, and constant memory usages are reported by the compiler when compiling with the --ptxas-options=-v option.
The total amount of shared memory required for a block is equal to the sum of the amount of statically allocated shared memory, the amount of dynamically allocated shared memory, and, for devices of compute capability 1.x, the amount of shared memory used to pass the kernel's arguments (see Section B.1.4).

The number of registers used by a kernel can have a significant impact on the number of resident warps. For example, for devices of compute capability 1.2, if a kernel uses 16 registers and each block has 512 threads and requires very little shared memory, then two blocks (i.e. 32 warps) can reside on the multiprocessor since they require 2x512x16 registers, which exactly matches the number of registers available on the multiprocessor. But as soon as the kernel uses one more register, only one block (i.e. 16 warps) can be resident since two blocks would require 2x512x17 registers, which are more registers than are available on the multiprocessor. Therefore, the compiler attempts to minimize register usage while keeping register spilling (see Section 5.3.2.2) and the number of instructions to a minimum. Register usage can be controlled using the -maxrregcount compiler option or launch bounds as described in Section B.17.

Each double variable (on devices that support native double precision, i.e. devices of compute capability 1.3 and higher) and each long long variable uses two registers. However, devices of compute capability 1.2 and higher have at least twice as many registers per multiprocessor as devices with lower compute capability.

The effect of execution configuration on performance for a given kernel call generally depends on the kernel code. Experimentation is therefore recommended. Applications can also parameterize execution configurations based on register file size and shared memory size, which depends on the compute capability of the device, as well as on the number of multiprocessors and memory bandwidth of the device, all of which can be queried using the runtime or driver API (see reference manual).

The number of threads per block should be chosen as a multiple of the warp size to avoid wasting computing resources with under-populated warps as much as possible.

5.3 Maximize Memory Throughput

The first step in maximizing overall memory throughput for the application is to minimize data transfers with low bandwidth.

That means minimizing data transfers between the host and the device, as detailed in Section 5.3.1, since these have much lower bandwidth than data transfers between global memory and the device.

That also means minimizing data transfers between global memory and the device by maximizing use of on-chip memory: shared memory and caches (i.e. L1/L2 caches available on devices of compute capability 2.x, texture cache and constant cache available on all devices).

Shared memory is equivalent to a user-managed cache: The application explicitly allocates and accesses it. As illustrated in Section 3.2.2, a typical programming pattern is to stage data coming from device memory into shared memory; in other words, to have each thread of a block:
 Load data from device memory to shared memory,
 Synchronize with all the other threads of the block so that each thread can safely read shared memory locations that were populated by different threads,
 Process the data in shared memory,
 Synchronize again if necessary to make sure that shared memory has been updated with the results,
 Write the results back to device memory.
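The kernel below is a minimal sketch of this five-step pattern for a block-wise array reversal; the kernel name and the 256-thread tile size are assumptions made for this example, not code from the CUDA toolkit:

// Sketch: stage a tile of global memory in shared memory, process it,
// and write it back. Each block reverses its own 256-element tile.
__global__ void ReverseTile(float* data)
{
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = data[i];                  // 1. load into shared memory
    __syncthreads();                              // 2. wait for all loads

    float v = tile[blockDim.x - 1 - threadIdx.x]; // 3. process in shared memory

    // 4. no second __syncthreads() needed here, since tile is not
    //    written again after the barrier
    data[i] = v;                                  // 5. write results back
}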

For some applications (e.g. for which global memory accesses are data-dependent), a traditional hardware-managed cache is more appropriate to exploit data locality. As mentioned in Section G.4.1, for devices of compute capability 2.x, the same on-chip memory is used for both L1 and shared memory, and how much of it is dedicated to L1 versus shared memory is configurable for each kernel call.

The throughput of memory accesses by a kernel can vary by an order of magnitude depending on the access pattern for each type of memory. The next step in maximizing memory throughput is therefore to organize memory accesses as optimally as possible based on the optimal memory access patterns described in Sections 5.3.2.1, 5.3.2.3, 5.3.2.4, and 5.3.2.5. This optimization is especially important for global memory accesses as global memory bandwidth is low, so non-optimal global memory accesses have a higher impact on performance.

5.3.1 Data Transfer between Host and Device

Applications should strive to minimize data transfer between the host and the device. One way to accomplish this is to move more code from the host to the device, even if that means running kernels with low parallelism computations. Intermediate data structures may be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory.

Also, because of the overhead associated with each transfer, batching many small transfers into a single large transfer always performs better than making each transfer separately.

On systems with a front-side bus, higher performance for data transfers between host and device is achieved by using page-locked host memory as described in Section 3.2.5.

In addition, when using mapped page-locked memory (Section 3.2.5.3), there is no need to allocate any device memory and explicitly copy data between device and host memory; data transfers are implicitly performed each time the kernel accesses the mapped memory. For maximum performance, these memory accesses must be coalesced as with accesses to global memory (see Section 5.3.2.1). Assuming that they are and that the mapped memory is read or written only once, using mapped page-locked memory instead of explicit copies between device and host memory can be a win for performance.

On integrated systems where device memory and host memory are physically the same, any copy between host and device memory is superfluous and mapped page-locked memory should be used instead. Applications may query whether a device is integrated or not by calling cudaGetDeviceProperties() and checking the integrated property or checking the CU_DEVICE_ATTRIBUTE_INTEGRATED attribute using cuDeviceGetAttribute().
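The following is a minimal sketch of the mapped-memory path described above (the kernel name, buffer names, and N are placeholders for this example); cudaSetDeviceFlags() must be called before any other call that initializes the device:

// Sketch: allocate mapped page-locked memory and let the kernel
// access host memory directly through its device pointer.
float *hostPtr, *devPtr;
cudaSetDeviceFlags(cudaDeviceMapHost);                 // enable mapping
cudaHostAlloc((void**)&hostPtr, N * sizeof(float), cudaHostAllocMapped);
cudaHostGetDevicePointer((void**)&devPtr, hostPtr, 0); // device view of hostPtr
MyKernel<<<N / 256, 256>>>(devPtr);                    // reads/writes host memory
cudaThreadSynchronize();                               // wait before touching hostPtr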

5.3.2 Device Memory Accesses

An instruction that accesses addressable memory (i.e. global, local, shared, constant, or texture memory) might need to be re-issued multiple times depending on the distribution of the memory addresses across the threads within the warp. How the distribution affects the instruction throughput this way is specific to each type of memory and described in the following sections. For example, for global memory, as a general rule, the more scattered the addresses are, the more reduced the throughput is.

5.3.2.1 Global Memory

Global memory resides in device memory and device memory is accessed via 32-, 64-, or 128-byte memory transactions. These memory transactions must be naturally aligned: Only the 32-, 64-, or 128-byte segments of device memory that are aligned to their size (i.e. whose first address is a multiple of their size) can be read or written by memory transactions.

When a warp executes an instruction that accesses global memory, it coalesces the memory accesses of the threads within the warp into one or more of these memory transactions depending on the size of the word accessed by each thread and the distribution of the memory addresses across the threads. In general, the more transactions are necessary, the more unused words are transferred in addition to the words accessed by the threads, reducing the instruction throughput accordingly. For example, if a 32-byte memory transaction is generated for each thread's 4-byte access, throughput is divided by 8.

How many transactions are necessary and how throughput is ultimately affected varies with the compute capability of the device. For devices of compute capability 1.0 and 1.1, the requirements on the distribution of the addresses across the threads to get any coalescing at all are very strict. They are much more relaxed for devices of higher compute capabilities. For devices of compute capability 2.x, the memory transactions are cached, so data locality is exploited to reduce impact on throughput. Sections G.3.2 and G.4.2 give more details on how global memory accesses are handled for various compute capabilities.

To maximize global memory throughput, it is therefore important to maximize coalescing by:
 Following the most optimal access patterns based on Sections G.3.2 and G.4.2,
 Using data types that meet the size and alignment requirement detailed in Section 5.3.2.1.1,
 Padding data in some cases, for example, when accessing a two-dimensional array as described in Section 5.3.2.1.2.

5.3.2.1.1 Size and Alignment Requirement

Global memory instructions support reading or writing words of size equal to 1, 2, 4, 8, or 16 bytes. Any access (via a variable or a pointer) to data residing in global memory compiles to a single global memory instruction if and only if the size of the data type is 1, 2, 4, 8, or 16 bytes and the data is naturally aligned (i.e. its address is a multiple of that size).

If this size and alignment requirement is not fulfilled, the access compiles to multiple instructions with interleaved access patterns that prevent these instructions from fully coalescing. It is therefore recommended to use types that meet this requirement for data that resides in global memory.

The alignment requirement is automatically fulfilled for the built-in types of Section B.3.1 like float2 or float4.

For structures, the size and alignment requirements can be enforced by the compiler using the alignment specifiers __align__(8) or __align__(16), such as

struct __align__(8) {
    float x;
    float y;
};

or

struct __align__(16) {
    float x;
    float y;
    float z;
};

Any address of a variable residing in global memory or returned by one of the memory allocation routines from the driver or runtime API is always aligned to at least 256 bytes.

Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results (off by a few words), so special care must be taken to maintain alignment of the starting address of any value or array of values of these types. A typical case where this might be easily overlooked is when using some custom global memory allocation scheme, whereby the allocations of multiple arrays (with multiple calls to cudaMalloc() or cuMemAlloc()) is replaced by the allocation of a single large block of memory partitioned into multiple arrays, in which case the starting address of each array is offset from the block's starting address.

5.3.2.1.2 Two-Dimensional Arrays

A common global memory access pattern is when each thread of index (tx,ty) uses the following address to access one element of a 2D array of width width, located at address BaseAddress of type type* (where type meets the requirement described in Section 5.3.2.1.1):

BaseAddress + width * ty + tx

For these accesses to be fully coalesced, both the width of the thread block and the width of the array must be a multiple of the warp size (or only half the warp size for devices of compute capability 1.x).

In particular, this means that an array whose width is not a multiple of this size will be accessed much more efficiently if it is actually allocated with a width rounded up to the closest multiple of this size and its rows padded accordingly. The cudaMallocPitch() and cuMemAllocPitch() functions and associated memory copy functions described in the reference manual enable programmers to write non-hardware-dependent code to allocate arrays that conform to these constraints.
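A brief sketch of such a padded allocation (the kernel, grid dimensions, and variable names are invented for this example) might look as follows:

// Sketch: a padded 2D allocation and the matching kernel indexing.
__global__ void ScaleRows(float* devPtr, size_t pitch, int width, int height)
{
    int tx = blockIdx.x * blockDim.x + threadIdx.x;
    int ty = blockIdx.y * blockDim.y + threadIdx.y;
    if (tx < width && ty < height) {
        // A row starts pitch bytes (not width elements) after the previous one.
        float* row = (float*)((char*)devPtr + ty * pitch);
        row[tx] *= 2.0f;
    }
}

// Host side: the runtime picks a pitch that satisfies the constraints above.
float* devPtr;
size_t pitch;
cudaMallocPitch((void**)&devPtr, &pitch, width * sizeof(float), height);
ScaleRows<<<grid, block>>>(devPtr, pitch, width, height);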

5.3.2.2 Local Memory

Local memory accesses only occur for some automatic variables as mentioned in Section B.2.4.3. Automatic variables that the compiler is likely to place in local memory are:
 Arrays for which it cannot determine that they are indexed with constant quantities,
 Large structures or arrays that would consume too much register space,
 Any variable if the kernel uses more registers than available (this is also known as register spilling).

Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep option) will tell if a variable has been placed in local memory during the first compilation phases, as it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. Even if it has not, subsequent compilation phases might still decide otherwise if they find it consumes too much register space for the targeted architecture: Inspection of the cubin object using cuobjdump will tell if this is the case. Also, the compiler reports total local memory usage per kernel (lmem) when compiling with the --ptxas-options=-v option. Note that some mathematical functions have implementation paths that might access local memory.

The local memory space resides in device memory, so local memory accesses have the same high latency and low bandwidth as global memory accesses and are subject to the same requirements for memory coalescing as described in Section 5.3.2.1. Local memory is however organized such that consecutive 32-bit words are accessed by consecutive thread IDs. Accesses are therefore fully coalesced as long as all threads in a warp access the same relative address (e.g. same index in an array variable, same member in a structure variable).

On devices of compute capability 2.x, local memory accesses are always cached in L1 and L2 in the same way as global memory accesses (see Section G.4.2).

5.3.2.3 Shared Memory

Because it is on-chip, the shared memory space is much faster than the local and global memory spaces. In fact, for all threads of a warp, accessing shared memory is fast as long as there are no bank conflicts between the threads, as detailed below.

To achieve high bandwidth, shared memory is divided into equally-sized memory modules, called banks, which can be accessed simultaneously. Any memory read or write request made of n addresses that fall in n distinct memory banks can therefore be serviced simultaneously, yielding an overall bandwidth that is n times as high as the bandwidth of a single module.

However, if two addresses of a memory request fall in the same memory bank, there is a bank conflict and the access has to be serialized. The hardware splits a memory request with bank conflicts into as many separate conflict-free requests as necessary, decreasing throughput by a factor equal to the number of separate memory requests. If the number of separate memory requests is n, the initial memory request is said to cause n-way bank conflicts.

To get maximum performance, it is therefore important to understand how memory addresses map to memory banks in order to schedule the memory requests so as to minimize bank conflicts. This is described in Sections G.3.3 and G.4.3 for devices of compute capability 1.x and 2.x, respectively.
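As an illustrative, non-authoritative sketch of how the indexing stride affects banking when consecutive 32-bit words map to consecutive banks (the kernel and array sizes are invented here):

__global__ void BankExample(float* out)
{
    __shared__ float shared[512];
    shared[threadIdx.x] = threadIdx.x;   // populate (conflict-free)
    __syncthreads();

    // Conflict-free: consecutive threads read consecutive 32-bit words,
    // which fall in distinct banks.
    float a = shared[threadIdx.x];

    // 2-way bank conflicts: a stride of two 32-bit words makes pairs of
    // threads hit the same bank, serializing the request into two.
    float b = shared[2 * threadIdx.x];

    out[threadIdx.x] = a + b;
}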

5.3.2.4 Constant Memory

The constant memory space resides in device memory and is cached in the constant cache mentioned in Sections G.3.1 and G.4.1.

For devices of compute capability 1.x, a constant memory request for a warp is first split into two requests, one for each half-warp, that are issued independently.

A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.

The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.

5.3.2.5 Texture and Surface Memory

The texture and surface memory spaces reside in device memory and are cached in texture cache, so a texture fetch or surface read costs one memory read from device memory only on a cache miss, otherwise it just costs one read from texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture or surface addresses that are close together in 2D will achieve best performance. Also, it is designed for streaming fetches with a constant latency; a cache hit reduces DRAM bandwidth demand but not fetch latency.

Reading device memory through texture or surface fetching presents some benefits that can make it an advantageous alternative to reading device memory from global or constant memory:
 If the memory reads do not follow the access patterns that global or constant memory reads must respect to get good performance (see Sections 5.3.2.1 and 5.3.2.4), higher bandwidth can be achieved providing that there is locality in the texture fetches or surface reads (this is less likely for devices of compute capability 2.x given that global memory reads are cached on these devices);
 Addressing calculations are performed outside the kernel by dedicated units;
 Packed data may be broadcast to separate variables in a single operation;
 8-bit and 16-bit integer input data may be optionally converted to 32-bit floating-point values in the range [0.0, 1.0] or [-1.0, 1.0] (see Section 3.2.4.1.1).

5.4 Maximize Instruction Throughput

To maximize instruction throughput the application should:
 Minimize the use of arithmetic instructions with low throughput; this includes trading precision for speed when it does not affect the end result, such as using intrinsic instead of regular functions (intrinsic functions are listed in Section C.2), single-precision instead of double-precision, or flushing denormalized numbers to zero;
 Minimize divergent warps caused by control flow instructions as detailed in Section 5.4.2;
 Reduce the number of instructions, for example, by optimizing out synchronization points whenever possible as described in Section 5.4.3 or by using restricted pointers as described in Section E.3.

In this section, throughputs are given in number of operations per clock cycle per multiprocessor. For a warp size of 32, one instruction results in 32 operations. Therefore, if T is the number of operations per clock cycle, the instruction throughput is one instruction every 32/T clock cycles.

5.4.1 Arithmetic Instructions

Table 5-1 gives the throughputs of the arithmetic instructions that are natively supported in hardware for devices of various compute capabilities. All throughputs are for one multiprocessor. They must be multiplied by the number of multiprocessors in the device to get throughput for the whole device.

Table 5-1. Throughput of Native Arithmetic Instructions (Operations per Clock Cycle per Multiprocessor); values are given for compute capability 1.x / 2.0 / 2.1, in that order:
 32-bit floating-point add, multiply, multiply-add: 8 / 32 / 48
 64-bit floating-point add, multiply, multiply-add: 1 / 16 / 4
 32-bit integer add, logical operation: 8 / 32 / 48
 32-bit integer shift, compare: 8 / 16 / 16
 32-bit integer multiply, multiply-add, sum of absolute difference: multiple instructions / 16 / 16
 24-bit integer multiply (__[u]mul24): 8 / multiple instructions / multiple instructions
 32-bit floating-point reciprocal, reciprocal square root, base-2 logarithm (__log2f), base-2 exponential (exp2f), sine (__sinf), cosine (__cosf): 2 / 4 / 8
 Type conversions: 8 / 16 / 16

Other instructions and functions are implemented on top of the native instructions. The implementation may be different for devices of compute capability 1.x and devices of compute capability 2.x, and the number of native instructions after compilation may fluctuate with every compiler version. For complicated functions, there can be multiple code paths depending on input. cuobjdump can be used to inspect a particular implementation in a cubin object.

The implementation of some functions is readily available on the CUDA header files (math_functions.h, device_functions.h, …).

In general, code compiled with -ftz=true (denormalized numbers are flushed to zero) tends to have higher performance than code compiled with -ftz=false.

Similarly, code compiled with -prec-div=false (less precise division) tends to have higher performance than code compiled with -prec-div=true, and code compiled with -prec-sqrt=false (less precise square root) tends to have higher performance than code compiled with -prec-sqrt=true. The nvcc user manual describes these compilation flags in more details.

Single-Precision Floating-Point Addition and Multiplication Intrinsics

__fadd_r[d,u], __fmul_r[d,u], and __fmaf_r[n,z,d,u] (see Section C.2.1) compile to tens of instructions for devices of compute capability 1.x, but map to a single native instruction for devices of compute capability 2.x.

Single-Precision Floating-Point Division

__fdividef(x, y) (see Section C.2.1) provides faster single-precision floating-point division than the division operator.

Single-Precision Floating-Point Reciprocal Square Root

To preserve IEEE-754 semantics the compiler can optimize 1.0/sqrtf() into rsqrtf() only when both reciprocal and square root are approximate (i.e. with -prec-div=false and -prec-sqrt=false). It is therefore recommended to invoke rsqrtf() directly where desired.

Single-Precision Floating-Point Square Root

Single-precision floating-point square root is implemented as a reciprocal square root followed by a reciprocal instead of a reciprocal square root followed by a multiplication, so that it gives correct results for 0 and infinity. Therefore, its throughput is 1 operation per clock cycle for devices of compute capability 1.x and 2 operations per clock cycle for devices of compute capability 2.x.

Sine and Cosine

sinf(x), cosf(x), tanf(x), sincosf(x), and corresponding double-precision instructions are much more expensive and even more so if the argument x is large in magnitude.

More precisely, the argument reduction code (see math_functions.h for implementation) comprises two code paths referred to as the fast path and the slow path, respectively. The fast path is used for arguments sufficiently small in magnitude and essentially consists of a few multiply-add operations. The slow path is used for arguments large in magnitude and consists of lengthy computations required to achieve correct results over the entire argument range.

At present, the argument reduction code for the trigonometric functions selects the fast path for arguments whose magnitude is less than 48039.0f for the single-precision functions, and less than 2147483648.0 for the double-precision functions.

As the slow path requires more registers than the fast path, an attempt has been made to reduce register pressure in the slow path by storing some intermediate variables in local memory, which may affect performance because of local memory high latency and bandwidth (see Section 5.3.2.2). At present, 28 bytes of local memory are used by single-precision functions, and 44 bytes are used by double-precision functions. However, the exact amount is subject to change.
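As a small, hedged illustration of the precision-for-speed trades above (only the function choices matter; the surrounding kernel is invented for this sketch):

__global__ void Example(float* out, const float* in)
{
    float x = in[threadIdx.x];
    // Standard functions: full single-precision accuracy.
    float slow = sinf(x) / sqrtf(x);
    // Intrinsics: faster, reduced accuracy (see Section C.2).
    float fast = __sinf(x) * rsqrtf(x);
    out[threadIdx.x] = slow - fast; // the difference shows the accuracy trade-off
}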

Due to the lengthy computations and use of local memory in the slow path, the throughput of these trigonometric functions is lower by one order of magnitude when the slow path reduction is required as opposed to the fast path reduction.

Integer Arithmetic

On devices of compute capability 1.x, 32-bit integer multiplication is implemented using multiple instructions as it is not natively supported. 24-bit integer multiplication is natively supported however via the __[u]mul24 intrinsic (see Section C.2.3). Using __[u]mul24 instead of the 32-bit multiplication operator whenever possible usually improves performance for instruction bound kernels. It can have the opposite effect however in cases where the use of __[u]mul24 inhibits compiler optimizations.

On devices of compute capability 2.x, 32-bit integer multiplication is natively supported, but 24-bit integer multiplication is not. __[u]mul24 is therefore implemented using multiple instructions and should not be used.

Integer division and modulo operation are costly: tens of instructions on devices of compute capability 1.x, below 20 instructions on devices of compute capability 2.x. They can be replaced with bitwise operations in some cases: If n is a power of 2, (i/n) is equivalent to (i>>log2(n)) and (i%n) is equivalent to (i&(n-1)); the compiler will perform these conversions if n is literal.

__brev, __brevll, __popc, and __popcll (see Section C.2.3) compile to tens of instructions for devices of compute capability 1.x, but __brev and __popc map to a single instruction for devices of compute capability 2.x and __brevll and __popcll to just a few.

__clz, __clzll, __ffs, and __ffsll (see Section C.2.3) compile to fewer instructions for devices of compute capability 2.x than for devices of compute capability 1.x.

Type Conversion

Sometimes, the compiler must insert conversion instructions, introducing additional execution cycles. This is the case for:
 Functions operating on variables of type char or short whose operands generally need to be converted to int,
 Double-precision floating-point constants (i.e. those constants defined without any type suffix) used as input to single-precision floating-point computations (as mandated by C/C++ standards).

This last case can be avoided by using single-precision floating-point constants, defined with an f suffix such as 3.141592653589793f, 1.0f, 0.5f.
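A brief hedged sketch combining the power-of-2 division rewrite and the f-suffix constants (the kernel and variable names are invented for this example):

__global__ void Example(int* idx, float* val)
{
    int i = threadIdx.x;
    // For a power-of-2 divisor n, division and modulo reduce to bitwise
    // operations (the compiler does this automatically when n is a literal):
    int warp = i >> 5;   // i / 32
    int lane = i & 31;   // i % 32
    idx[i] = warp + lane;

    // 0.5 is a double-precision constant and would force a double-precision
    // multiply; the f suffix keeps the computation in single precision.
    val[i] = val[i] * 0.5f;
}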

5.4.2 Control Flow Instructions

Any flow control instruction (if, switch, do, for, while) can significantly impact the effective instruction throughput by causing threads of the same warp to diverge (i.e. to follow different execution paths). If this happens, the different execution paths have to be serialized, increasing the total number of instructions executed for this warp. When all the different execution paths have completed, the threads converge back to the same execution path.

To obtain best performance in cases where the control flow depends on the thread ID, the controlling condition should be written so as to minimize the number of divergent warps. This is possible because the distribution of the warps across the block is deterministic as mentioned in Section 4.1. A trivial example is when the controlling condition only depends on (threadIdx / warpSize) where warpSize is the warp size. In this case, no warp diverges since the controlling condition is perfectly aligned with the warps.

Sometimes, the compiler may unroll loops or it may optimize out if or switch statements by using branch predication instead, as detailed below. In these cases, no warp can ever diverge. The programmer can also control loop unrolling using the #pragma unroll directive (see Section E.2).

When using branch predication none of the instructions whose execution depends on the controlling condition gets skipped. Instead, each of them is associated with a per-thread condition code or predicate that is set to true or false based on the controlling condition and although each of these instructions gets scheduled for execution, only the instructions with a true predicate are actually executed. Instructions with a false predicate do not write results, and also do not evaluate addresses or read operands.

The compiler replaces a branch instruction with predicated instructions only if the number of instructions controlled by the branch condition is less or equal to a certain threshold: If the compiler determines that the condition is likely to produce many divergent warps, this threshold is 7, otherwise it is 4.

5.4.3 Synchronization Instruction

Throughput for __syncthreads() is 8 operations per clock cycle for devices of compute capability 1.x and 16 operations per clock cycle for devices of compute capability 2.x.

Note that __syncthreads() can impact performance by forcing the multiprocessor to idle as detailed in Section 5.2.3.

Because a warp executes one common instruction at a time, threads within a warp are implicitly synchronized and this can sometimes be used to omit __syncthreads() for better performance.

In the following code sample, both calls to __syncthreads() are required to get the expected result (i.e. result[i] = 2 * myArray[i] for i > 0). Without synchronization, any of the two references to myArray[tid] could return either 2 or the value initially stored in myArray, depending on whether the memory read occurs before or after the memory write from myArray[tid + 1] = 2.

// myArray is an array of integers located in global or shared
// memory
__global__ void MyKernel(int* result) {
    int tid = threadIdx.x;
    ...
    int ref1 = myArray[tid];
    __syncthreads();
    myArray[tid + 1] = 2;
    __syncthreads();
    int ref2 = myArray[tid];
    result[tid] = ref1 * ref2;
    ...
}

However, in the following slightly modified code sample, threads are guaranteed to belong to the same warp, so that there is no need for any __syncthreads().

// myArray is an array of integers located in global or shared
// memory
__global__ void MyKernel(int* result) {
    int tid = threadIdx.x;
    ...
    if (tid < warpSize) {
        int ref1 = myArray[tid];
        myArray[tid + 1] = 2;
        int ref2 = myArray[tid];
        result[tid] = ref1 * ref2;
    }
    ...
}

Simply removing the __syncthreads() is not enough however; myArray must also be declared as volatile as described in Section B.2.5.

Appendix A. CUDA-Enabled GPUs

Table A-1 lists all CUDA-enabled devices with their compute capability, number of multiprocessors, and number of CUDA cores. These, as well as the clock frequency and the total amount of device memory, can be queried using the runtime or driver API (see reference manual).

Table A-1. CUDA-Enabled Devices with Compute Capability, Number of Multiprocessors, and Number of CUDA Cores (each entry reads: compute capability / multiprocessors / CUDA cores)
 GeForce GTX 460: 2.1 / 7 / 336
 GeForce GTX 470M: 2.1 / 6 / 288
 GeForce GTS 450, GTX 460M: 2.1 / 4 / 192
 GeForce GT 445M: 2.1 / 3 / 144
 GeForce GT 435M, GT 425M, GT 420M: 2.1 / 2 / 96
 GeForce GT 415M: 2.1 / 1 / 48
 GeForce GTX 580: 2.0 / 16 / 512
 GeForce GTX 480: 2.0 / 15 / 480
 GeForce GTX 470: 2.0 / 14 / 448
 GeForce GTX 465, GTX 480M: 2.0 / 11 / 352
 GeForce GTX 295: 1.3 / 2x30 / 2x240
 GeForce GTX 285, GTX 280, GTX 275: 1.3 / 30 / 240
 GeForce GTX 260: 1.3 / 24 / 192
 GeForce 9800 GX2: 1.1 / 2x16 / 2x128
 GeForce GTS 250, GTS 150, 9800 GTX, 9800 GTX+, 8800 GTS 512, GTX 285M, GTX 280M: 1.1 / 16 / 128
 GeForce 8800 Ultra, 8800 GTX: 1.0 / 16 / 128
 GeForce 9800 GT, 8800 GT, GTX 260M, 9800M GTX: 1.1 / 14 / 112

 GeForce GT 240, GTS 350M, GTS 360M: 1.2 / 12 / 96
 GeForce GT 130, 9600 GSO, 8800 GS, 8800M GTX, GTS 250M, GTS 260M, 9800M GT: 1.1 / 12 / 96
 GeForce 8800 GTS: 1.0 / 12 / 96
 GeForce GT 335M: 1.2 / 9 / 72
 GeForce 9600 GT, 8800M GTS, 9800M GTS: 1.1 / 8 / 64
 GeForce GT 220, GT 330M, GT 325M, GT 240M: 1.2 / 6 / 48
 GeForce 9700M GT, GT 230M: 1.1 / 6 / 48
 GeForce GT 120, 9500 GT, 8600 GTS, 8600 GT, 9650M GS, 9600M GT, 9600M GS, 9500M GS, 8700M GT, 8600M GT, 8600M GS: 1.1 / 4 / 32
 GeForce 210, 310M, 305M: 1.2 / 2 / 16
 GeForce G100, 8500 GT, 8400 GS, 8400M GT, 9500M G, 9300M G, 8400M GS, 9400 mGPU, 9300 mGPU, 8300 mGPU, 8200 mGPU, 8100 mGPU, G210M, G110M: 1.1 / 2 / 16
 GeForce 9300M GS, 9200M GS, 9100M G, 8400M G, G105M: 1.1 / 1 / 8
 Tesla C2050: 2.0 / 14 / 448
 Tesla S1070: 1.3 / 4x30 / 4x240
 Tesla C1060: 1.3 / 30 / 240
 Tesla S870: 1.0 / 4x16 / 4x128
 Tesla D870: 1.0 / 2x16 / 2x128
 Tesla C870: 1.0 / 16 / 128
 Quadro 2000: 2.1 / 4 / 192
 Quadro 600: 2.1 / 2 / 96
 Quadro 6000: 2.0 / 14 / 448
 Quadro 5000: 2.0 / 11 / 352
 Quadro 5000M: 2.0 / 10 / 320
 Quadro 4000: 2.0 / 8 / 256
 Quadro Plex 2200 D2: 1.3 / 2x30 / 2x240
 Quadro Plex 2100 D4: 1.1 / 4x14 / 4x112
 Quadro Plex 2100 Model S4: 1.0 / 4x16 / 4x128
 Quadro Plex 1000 Model IV: 1.0 / 2x16 / 2x128
 Quadro FX 5800: 1.3 / 30 / 240
 Quadro FX 4800: 1.3 / 24 / 192
 Quadro FX 4700 X2: 1.1 / 2x14 / 2x112
 Quadro FX 3700M, FX 3800M: 1.1 / 16 / 128
 Quadro FX 5600: 1.0 / 16 / 128

 Quadro FX 3700: 1.1 / 14 / 112
 Quadro FX 2800M: 1.1 / 12 / 96
 Quadro FX 4600: 1.0 / 12 / 96
 Quadro FX 1800M: 1.2 / 9 / 72
 Quadro FX 3600M: 1.1 / 8 / 64
 Quadro FX 880M, NVS 5100M: 1.2 / 6 / 48
 Quadro FX 2700M: 1.1 / 6 / 48
 Quadro FX 1700, FX 570, NVS 320M, FX 1700M, FX 1600M, FX 770M, FX 570M: 1.1 / 4 / 32
 Quadro FX 380 LP, FX 380M, NVS 3100M, NVS 2100M: 1.2 / 2 / 16
 Quadro FX 370, NVS 290, NVS 160M, NVS 150M, NVS 140M, NVS 135M, NVS 130M: 1.1 / 2 / 16
 Quadro FX 370M, FX 360M: 1.1 / 1 / 8


Appendix B. C Language Extensions

B.1 Function Type Qualifiers

Function type qualifiers specify whether a function executes on the host or on the device and whether it is callable from the host or from the device.

B.1.1 __device__

The __device__ qualifier declares a function that is:
 Executed on the device,
 Callable from the device only.

In device code compiled for devices of compute capability 1.x, a __device__ function is always inlined by default. The __noinline__ function qualifier however can be used as a hint for the compiler not to inline the function if possible (see Section E.1).

B.1.2 __global__

The __global__ qualifier declares a function as being a kernel. Such a function is:
 Executed on the device,
 Callable from the host only.

__global__ functions must have void return type.

Any call to a __global__ function must specify its execution configuration as described in Section B.16, as in the sketch below.

A call to a __global__ function is asynchronous, meaning it returns before the device has completed its execution.
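For instance (a minimal sketch; the kernel, its argument, and the configuration values are invented for this example):

// A kernel and a call with its execution configuration
// (grid of 64 blocks, 256 threads per block).
__global__ void AddOne(float* data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += 1.0f;
}

// Host code (devData is a placeholder device pointer):
AddOne<<<64, 256>>>(devData); // returns immediately; device runs asynchronously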

B.1.3 __host__

The __host__ qualifier declares a function that is:
 Executed on the host,
 Callable from the host only.

It is equivalent to declare a function with only the __host__ qualifier or to declare it without any of the __host__, __device__, or __global__ qualifiers; in either case the function is compiled for the host only.

The __global__ and __host__ qualifiers cannot be used together.

The __device__ and __host__ qualifiers can be used together however, in which case the function is compiled for both the host and the device. The __CUDA_ARCH__ macro introduced in Section 3.1.4 can be used to differentiate code paths between host and device:

__host__ __device__ func()
{
#if __CUDA_ARCH__ == 100
    // Device code path for compute capability 1.0
#elif __CUDA_ARCH__ == 200
    // Device code path for compute capability 2.0
#elif !defined(__CUDA_ARCH__)
    // Host code path
#endif
}

B.1.4 Restrictions

B.1.4.1 Functions Parameters

__global__ function parameters are passed to the device:
 via shared memory and are limited to 256 bytes on devices of compute capability 1.x,
 via constant memory and are limited to 4 KB on devices of compute capability 2.x.

B.1.4.2 Variadic Functions

__device__ and __global__ functions cannot have a variable number of arguments.

B.1.4.3 Static Variables

__device__ and __global__ functions cannot declare static variables inside their body.

B.1.4.4 Function Pointers

Function pointers to __global__ functions are supported, but function pointers to __device__ functions are only supported in device code compiled for devices of compute capability 2.x.

It is not allowed to take the address of a __device__ function in host code.

B.1.4.5 Recursion

__global__ functions do not support recursion.

__device__ functions only support recursion in device code compiled for devices of compute capability 2.x.

B.2 Variable Type Qualifiers

Variable type qualifiers specify the memory location on the device of a variable.

B.2.1 __device__

The __device__ qualifier declares a variable that resides on the device.

At most one of the other type qualifiers defined in the next three sections may be used together with __device__ to further specify which memory space the variable belongs to. If none of them is present, the variable:
 Resides in global memory space,
 Has the lifetime of an application,
 Is accessible from all the threads within the grid and from the host through the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime API and cuModuleGetGlobal() for the driver API).

B.2.2 __constant__

The __constant__ qualifier, optionally used together with __device__, declares a variable that:
 Resides in constant memory space,
 Has the lifetime of an application,
 Is accessible from all the threads within the grid and from the host through the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMemcpyFromSymbol() for the runtime API and cuModuleGetGlobal() for the driver API).

B.2.3 __shared__

The __shared__ qualifier, optionally used together with __device__, declares a variable that:
 Resides in the shared memory space of a thread block,
 Has the lifetime of the block,
 Is only accessible from all the threads within the block.

When declaring a variable in shared memory as an external array such as

extern __shared__ float shared[];

the size of the array is determined at launch time (see Section B.16). All variables declared in this fashion start at the same address in memory, so that the layout of the variables in the array must be explicitly managed through offsets.

For example, if one wants the equivalent of

short array0[128];
float array1[64];
int array2[256];

in dynamically allocated shared memory, one could declare and initialize the arrays the following way:

extern __shared__ float array[];
__device__ void func() // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[128];
    int*   array2 = (int*)&array1[64];
}

Note that pointers need to be aligned to the type they point to, so the following code, for example, does not work since array1 is not aligned to 4 bytes.

extern __shared__ float array[];
__device__ void func() // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[127];
}

Alignment requirements for the built-in vector types are listed in Table B-1.

B.2.4 Restrictions

The __device__, __shared__ and __constant__ qualifiers are not allowed on struct and union members, on formal parameters and on local variables within a function that executes on the host.

B.2.4.1 Storage and Scope

__shared__ and __constant__ variables have implied static storage.

__device__ and __constant__ variables are only allowed at file scope.

__device__, __shared__ and __constant__ variables cannot be defined as external using the extern keyword. The only exception is for dynamically allocated __shared__ variables as described in Section B.2.3.

B.2.4.2 Assignment

__constant__ variables cannot be assigned to from the device, only from the host through host runtime functions (Sections 3.2.1 and 3.3.4).

__shared__ variables cannot have an initialization as part of their declaration.

B.2.4.3 Automatic Variable

An automatic variable declared in device code without any of the __device__, __shared__ and __constant__ qualifiers generally resides in a register. However in some cases the compiler might choose to place it in local memory, which can have adverse performance consequences as detailed in Section 5.3.2.2.

B.2.4.4 Pointers

For devices of compute capability 1.x, pointers in code that is executed on the device are supported as long as the compiler is able to resolve whether they point to either the shared memory space or the global memory space, otherwise they are restricted to only point to memory allocated or declared in the global memory space. For devices of compute capability 2.x, pointers are supported without any restriction.

Dereferencing a pointer either to global or shared memory in code that is executed on the host or to host memory in code that is executed on the device results in an undefined behavior, most often in a segmentation fault and application termination.

The address obtained by taking the address of a __device__, __shared__ or __constant__ variable can only be used in device code. The address of a __device__ or __constant__ variable obtained through cudaGetSymbolAddress() as described in Section 3.2.1 can only be used in host code.

B.2.5 volatile

Only after the execution of a __threadfence_block(), __threadfence(), or __syncthreads() (Sections B.5 and B.6) are prior writes to global or shared memory guaranteed to be visible by other threads. As long as this requirement is met, the compiler is free to optimize reads and writes to global or shared memory. For example, in the code sample below, the first reference to myArray[tid] compiles into a global or shared memory read instruction, but the second reference does not as the compiler simply reuses the result of the first read.

// myArray is an array of non-zero integers
// located in global or shared memory
__global__ void MyKernel(int* result) {
    int tid = threadIdx.x;
    int ref1 = myArray[tid] * 1;
    myArray[tid + 1] = 2;
    int ref2 = myArray[tid] * 1;
    result[tid] = ref1 * ref2;
}

Therefore, ref2 cannot possibly be equal to 2 in thread tid as a result of thread tid-1 overwriting myArray[tid] by 2.

This behavior can be changed using the volatile keyword: If a variable located in global or shared memory is declared as volatile, the compiler assumes that its value can be changed at any time by another thread and therefore any reference to this variable compiles to an actual memory read instruction.

Note that even if myArray is declared as volatile in the code sample above, there is no guarantee, in general, that ref2 will be equal to 2 in thread tid since thread tid might read myArray[tid] into ref2 before thread tid-1 overwrites its value by 2. Synchronization is required as mentioned in Section 5.4.3.


B.3 Built-in Vector Types
B.3.1 char1, uchar1, char2, uchar2, char3, uchar3, char4, uchar4, short1, ushort1, short2, ushort2, short3, ushort3, short4, ushort4, int1, uint1, int2, uint2, int3, uint3, int4, uint4, long1, ulong1, long2, ulong2, long3, ulong3, long4, ulong4, longlong1, ulonglong1, longlong2, ulonglong2, float1, float2, float3, float4, double1, double2
These are vector types derived from the basic integer and floating-point types. They are structures and the 1st, 2nd, 3rd, and 4th components are accessible through the fields x, y, z, and w, respectively. They all come with a constructor function of the form make_<type name>; for example,
int2 make_int2(int x, int y);

which creates a vector of type int2 with value (x, y). In host code, the alignment requirement of a vector type is equal to the alignment requirement of its base type. This is not always the case in device code as detailed in Table B-1.

Table B-1. Alignment Requirements in Device Code

Type                      Alignment
char1, uchar1             1
char2, uchar2             2
char3, uchar3             1
char4, uchar4             4
short1, ushort1           2
short2, ushort2           4
short3, ushort3           2
short4, ushort4           8
int1, uint1               4
int2, uint2               8
int3, uint3               4
int4, uint4               16
long1, ulong1             4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long2, ulong2             8 if sizeof(long) is equal to sizeof(int), 16 otherwise
long3, ulong3              4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long4, ulong4             16
longlong1, ulonglong1     8
longlong2, ulonglong2     16
float1                    4
float2                    8
float3                    4
float4                    16
double1                   8
double2                   16

B.3.2 dim3
This type is an integer vector type based on uint3 that is used to specify dimensions. When defining a variable of type dim3, any component left unspecified is initialized to 1.
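For example (an illustrative snippet):

dim3 grid(16, 16);  // grid.z is initialized to 1
dim3 block(256);    // block.y and block.z are initialized to 1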

B.4 Built-in Variables
Built-in variables specify the grid and block dimensions and the block and thread indices. They are only valid within functions that are executed on the device.

B.4.1 gridDim
This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the grid.

B.4.2 blockIdx
This variable is of type uint3 (see Section B.3.1) and contains the block index within the grid.

B.4.3 blockDim
This variable is of type dim3 (see Section B.3.2) and contains the dimensions of the block.

B.4.4 threadIdx
This variable is of type uint3 (see Section B.3.1) and contains the thread index within the block.


B.4.5 warpSize
This variable is of type int and contains the warp size in threads (see Section 4.1 for the definition of a warp).

B.4.6 Restrictions
 It is not allowed to take the address of any of the built-in variables.
 It is not allowed to assign values to any of the built-in variables.
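As an illustration of how these built-in variables are typically used together, the following minimal sketch (the kernel and array names are hypothetical) computes a global index for a one-dimensional grid of one-dimensional blocks:

__global__ void VecAdd(const float* A, const float* B, float* C, int N)
{
    // Global thread index built from the block index, the block
    // dimensions, and the thread index within the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}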

B.5 Memory Fence Functions
void __threadfence_block();

waits until all global and shared memory accesses made by the calling thread prior to __threadfence_block() are visible to all threads in the thread block.
void __threadfence();

waits until all global and shared memory accesses made by the calling thread prior to __threadfence() are visible to:
 All threads in the thread block for shared memory accesses,
 All threads in the device for global memory accesses.
void __threadfence_system();

waits until all global and shared memory accesses made by the calling thread prior to __threadfence_system() are visible to:
 All threads in the thread block for shared memory accesses,
 All threads in the device for global memory accesses,
 Host threads for page-locked host memory accesses (see Section 3.2.5.3).
__threadfence_system() is only supported by devices of compute capability 2.x.

In general, when a thread issues a series of writes to memory in a particular order, other threads may see the effects of these memory writes in a different order. __threadfence_block(), __threadfence(), and __threadfence_system() can be used to enforce some ordering. One use case is when threads consume some data produced by other threads as illustrated by the following code sample of a kernel that computes the sum of an array of N numbers in one call. Each block first sums a subset of the array and stores the result in global memory. When all blocks are done, the last block done reads each of these partial sums from global memory and sums them to obtain the final result. In order to determine which block is finished last, each block atomically increments a counter to signal that it is done with computing and storing its partial sum (see Section B.11 about atomic functions). The last block is the one that receives the counter value equal to gridDim.x-1. If no fence is placed between storing the partial sum and incrementing the counter, the counter might increment before the partial sum is stored and therefore, might reach gridDim.x-1 and let


the last block start reading partial sums before they have actually been updated in memory.

__device__ unsigned int count = 0;
__shared__ bool isLastBlockDone;

__global__ void sum(const float* array, unsigned int N,
                    float* result)
{
    // Each block sums a subset of the input array
    float partialSum = calculatePartialSum(array, N);

    if (threadIdx.x == 0) {

        // Thread 0 of each block stores the partial sum
        // to global memory
        result[blockIdx.x] = partialSum;

        // Thread 0 makes sure its result is visible to
        // all other threads
        __threadfence();

        // Thread 0 of each block signals that it is done
        unsigned int value = atomicInc(&count, gridDim.x);

        // Thread 0 of each block determines if its block is
        // the last block to be done
        isLastBlockDone = (value == (gridDim.x - 1));
    }

    // Synchronize to make sure that each thread reads
    // the correct value of isLastBlockDone
    __syncthreads();

    if (isLastBlockDone) {

        // The last block sums the partial sums
        // stored in result[0 .. gridDim.x-1]
        float totalSum = calculateTotalSum(result);

        if (threadIdx.x == 0) {

            // Thread 0 of last block stores total sum
            // to global memory and resets count so that
            // next kernel call works properly
            result[0] = totalSum;
            count = 0;
        }
    }
}

B.6 Synchronization Functions
void __syncthreads();

waits until all threads in the thread block have reached this point and all global and shared memory accesses made by these threads prior to __syncthreads() are visible to all threads in the block.
__syncthreads() is used to coordinate communication between the threads of the same block. When some threads within a block access the same addresses in shared or global memory, there are potential read-after-write, write-after-read, or write-after-write hazards for some of these memory accesses. These data hazards can be avoided by synchronizing threads in-between these accesses.
__syncthreads() is allowed in conditional code but only if the conditional evaluates identically across the entire thread block, otherwise the code execution is likely to hang or produce unintended side effects.
Devices of compute capability 2.x support three variations of __syncthreads() described below.
int __syncthreads_count(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns the number of threads for which predicate evaluates to non-zero.
int __syncthreads_and(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for all of them.
int __syncthreads_or(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for any of them.
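As a brief sketch of how one of these variants can be used (the kernel and parameter names are hypothetical; it assumes compute capability 2.x and a grid that exactly covers the data array, since every thread of the block must reach the call):

__global__ void CountAboveThreshold(const float* data, float threshold,
                                    int* blockCounts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Each thread contributes its own predicate value and receives
    // back the block-wide count of non-zero predicates
    int n = __syncthreads_count(data[i] > threshold);
    if (threadIdx.x == 0)
        blockCounts[blockIdx.x] = n;
}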

B.7 Mathematical Functions
Section C.1 contains a comprehensive list of the C/C++ standard library mathematical functions that are currently supported in device code, along with their respective error bounds. For some of these functions, a less accurate, but faster version exists in the device runtime component; it has the same name prefixed with __ (such as __sinf(x)). These intrinsic functions are listed in Section C.2, along with their respective error bounds. When executed in host code, a given function uses the C runtime implementation if available.
The compiler has an option (-use_fast_math) that forces each function in Table B-2 to compile to its intrinsic counterpart. In addition to reducing the accuracy of the affected functions, this may also cause some differences in special case handling. A more robust approach is to selectively replace mathematical function calls by calls to intrinsic functions only where it is merited by the performance gains and where changed properties such as reduced accuracy and different special case handling can be tolerated.

Table B-2. Functions Affected by -use_fast_math

Operator/Function         Device Function
x/y                       __fdividef(x,y)
sinf(x)                   __sinf(x)
cosf(x)                   __cosf(x)
tanf(x)                   __tanf(x)
sincosf(x,sptr,cptr)      __sincosf(x,sptr,cptr)
logf(x)                   __logf(x)
log2f(x)                  __log2f(x)
log10f(x)                 __log10f(x)
expf(x)                   __expf(x)
exp10f(x)                 __exp10f(x)
powf(x,y)                 __powf(x,y)

B.8 Texture Functions
For texture functions, a combination of the texture reference's immutable (i.e. compile-time) and mutable (i.e. runtime) attributes determine how the texture coordinates are interpreted, what processing occurs during the texture fetch, and the return value delivered by the texture fetch. Immutable attributes are described in Section 3.2.4.1.1. Mutable attributes are described in Section 3.2.4.1.2. Texture fetching is described in Appendix F.

B.8.1 tex1Dfetch()
template<class Type>
Type tex1Dfetch(
    texture<Type, 1, cudaReadModeElementType> texRef,
    int x);

float tex1Dfetch(
    texture<unsigned char, 1, cudaReadModeNormalizedFloat> texRef,
    int x);

float tex1Dfetch(
    texture<signed char, 1, cudaReadModeNormalizedFloat> texRef,
    int x);

float tex1Dfetch(
    texture<unsigned short, 1, cudaReadModeNormalizedFloat> texRef,
    int x);

float tex1Dfetch(
    texture<signed short, 1, cudaReadModeNormalizedFloat> texRef,
    int x);

fetch the region of linear memory bound to texture reference texRef using integer texture coordinate x. No texture filtering and addressing modes are supported. For integer types, these functions may optionally promote the integer to single-precision floating point.

Besides the functions shown above, 2- and 4-tuples are supported; for example:
float4 tex1Dfetch(
    texture<uchar4, 1, cudaReadModeNormalizedFloat> texRef,
    int x);
fetches the region of linear memory bound to texture reference texRef using texture coordinate x.

B.8.2 tex1D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex1D(texture<Type, 1, readMode> texRef, float x);
fetches the CUDA array bound to texture reference texRef using texture coordinate x.

B.8.3 tex2D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex2D(texture<Type, 2, readMode> texRef, float x, float y);
fetches the CUDA array or the region of linear memory bound to texture reference texRef using texture coordinates x and y.

B.8.4 tex3D()
template<class Type, enum cudaTextureReadMode readMode>
Type tex3D(texture<Type, 3, readMode> texRef,
           float x, float y, float z);
fetches the CUDA array bound to texture reference texRef using texture coordinates x, y, and z.
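As a short sketch of a typical 2D texture fetch (the texture reference name and kernel are hypothetical; the host must declare and bind texRef before launch):

texture<float, 2, cudaReadModeElementType> texRef;

__global__ void CopyViaTexture(float* output, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        // Unnormalized coordinates into the bound CUDA array
        output[y * width + x] = tex2D(texRef, x, y);
}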

B.9 Surface Functions
Surface functions are only supported by devices of compute capability 2.0 and higher. Surface reference declaration and binding are described in Section 3.2.4.2.
In the sections below, boundaryMode specifies the boundary mode, that is, how out-of-range surface coordinates are handled; it is equal to either cudaBoundaryModeClamp, in which case out-of-range coordinates are clamped to the valid range, or cudaBoundaryModeZero, in which case out-of-range reads return zero and out-of-range writes are ignored, or cudaBoundaryModeTrap, in which case out-of-range accesses cause the kernel execution to fail.

B.9.1 surf1Dread()
template<class Type>
Type surf1Dread(surface<void, 1> surfRef, int x,
                boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinate x.

B.9.2 surf1Dwrite()
template<class Type>
void surf1Dwrite(Type data, surface<void, 1> surfRef, int x,
                 boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at coordinate x.

B.9.3 surf2Dread()
template<class Type>
Type surf2Dread(surface<void, 2> surfRef, int x, int y,
                boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to surface reference surfRef using coordinates x and y.

B.9.4 surf2Dwrite()
template<class Type>
void surf2Dwrite(Type data, surface<void, 2> surfRef, int x, int y,
                 boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to surface reference surfRef at coordinates x and y.

B.10 Time Function
clock_t clock();
when executed in device code, returns the value of a per-multiprocessor counter that is incremented every clock cycle. Sampling this counter at the beginning and at the end of a kernel, taking the difference of the two samples, and recording the result per thread provides a measure for each thread of the number of clock cycles taken by the device to completely execute the thread, but not of the number of clock cycles the device actually spent executing thread instructions. The former number is greater than the latter since threads are time sliced.
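A minimal sketch of this sampling pattern (the kernel and parameter names are hypothetical):

__global__ void TimedKernel(const float* in, float* out, clock_t* timing)
{
    clock_t start = clock();
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[i] * in[i];  // the work being timed
    clock_t stop = clock();
    // One representative thread records the per-block elapsed clocks
    if (threadIdx.x == 0)
        timing[blockIdx.x] = stop - start;
}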

B.11 Atomic Functions
An atomic function performs a read-modify-write atomic operation on one 32-bit or 64-bit word residing in global or shared memory. For example, atomicAdd() reads a 32-bit word at some address in global or shared memory, adds a number to it, and writes the result back to the same address. The operation is atomic in the sense that it is guaranteed to be performed without interference from other threads. In other words, no other thread can access this address until the operation is complete.
Atomic functions can only be used in device functions and are only available for devices of compute capability 1.1 and above. Atomic functions operating on shared memory and atomic functions operating on 64-bit words are only available for devices of compute capability 1.2 and above. Atomic functions operating on 64-bit words in shared memory are only available for devices of compute capability 2.x and higher. Atomic functions operating on mapped page-locked memory (Section 3.2.5.3) are not atomic from the point of view of the host or other devices.
Atomic operations only work with signed and unsigned integers, with the exception of atomicAdd() for devices of compute capability 2.x and atomicExch() for all devices, which also work for single-precision floating-point numbers. Note however that any atomic operation can be implemented based on atomicCAS() (Compare And Swap). For example, atomicAdd() for double-precision floating-point numbers can be implemented as follows:

__device__ double atomicAdd(double* address, double val)
{
    double old = *address, assumed;
    do {
        assumed = old;
        old = __longlong_as_double(
            atomicCAS((unsigned long long int*)address,
                      __double_as_longlong(assumed),
                      __double_as_longlong(val + assumed)));
    } while (assumed != old);
    return old;
}

B.11.1 Arithmetic Functions
B.11.1.1 atomicAdd()
int atomicAdd(int* address, int val);
unsigned int atomicAdd(unsigned int* address, unsigned int val);
unsigned long long int atomicAdd(unsigned long long int* address,
                                 unsigned long long int val);
float atomicAdd(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, computes (old + val), and stores the result back to memory at

the same address. These three operations are performed in one atomic transaction. The function returns old. The floating-point version of atomicAdd() is only supported by devices of compute capability 2.x.

B.11.1.2 atomicSub()
int atomicSub(int* address, int val);
unsigned int atomicSub(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old - val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.1.3 atomicExch()
int atomicExch(int* address, int val);
unsigned int atomicExch(unsigned int* address, unsigned int val);
unsigned long long int atomicExch(unsigned long long int* address,
                                  unsigned long long int val);
float atomicExch(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory and stores val back to memory at the same address. These two operations are performed in one atomic transaction. The function returns old.

B.11.1.4 atomicMin()
int atomicMin(int* address, int val);
unsigned int atomicMin(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes the minimum of old and val, and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.1.5 atomicMax()
int atomicMax(int* address, int val);
unsigned int atomicMax(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes the maximum of old and val, and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.1.6 atomicInc()
unsigned int atomicInc(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes ((old >= val) ? 0 : (old+1)), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.1.7 atomicDec()
unsigned int atomicDec(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (((old == 0) | (old > val)) ? val : (old-1)), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.1.8 atomicCAS()
int atomicCAS(int* address, int compare, int val);
unsigned int atomicCAS(unsigned int* address, unsigned int compare,
                       unsigned int val);
unsigned long long int atomicCAS(unsigned long long int* address,
                                 unsigned long long int compare,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, computes (old == compare ? val : old), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old (Compare And Swap).

B.11.2 Bitwise Functions
B.11.2.1 atomicAnd()
int atomicAnd(int* address, int val);
unsigned int atomicAnd(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old & val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.2.2 atomicOr()
int atomicOr(int* address, int val);
unsigned int atomicOr(unsigned int* address, unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory, computes (old | val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.11.2.3 atomicXor()
int atomicXor(int* address, int val);
unsigned int atomicXor(unsigned int* address, unsigned int val);

reads the 32-bit word old located at the address address in global or shared memory, computes (old ^ val), and stores the result back to memory at the same address. These three operations are performed in one atomic transaction. The function returns old.

B.12 Warp Vote Functions
Warp vote functions are only supported by devices of compute capability 1.2 and higher (see Section 4.1 for the definition of a warp).
int __all(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if predicate evaluates to non-zero for all of them.
int __any(int predicate);
evaluates predicate for all threads of the warp and returns non-zero if and only if predicate evaluates to non-zero for any of them.
unsigned int __ballot(int predicate);
evaluates predicate for all threads of the warp and returns an integer whose Nth bit is set if and only if predicate evaluates to non-zero for the Nth thread of the warp. This function is only supported by devices of compute capability 2.x.
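As a brief sketch (the kernel and parameter names are hypothetical), combining __ballot() with __popc() (Section C.2.3) counts how many threads of a warp satisfy a predicate on devices of compute capability 2.x:

__global__ void CountNegativePerWarp(const int* data, int* counts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Each bit of mask corresponds to one thread of the warp
    unsigned int mask = __ballot(data[i] < 0);
    // The first thread of each warp stores the population count
    if (threadIdx.x % warpSize == 0)
        counts[i / warpSize] = __popc(mask);
}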

B.13 Profiler Counter Function
Each multiprocessor has a set of sixteen hardware counters that an application can increment with a single instruction by calling the __prof_trigger() function.
void __prof_trigger(int counter);
increments by one per warp the per-multiprocessor hardware counter of index counter. Counters 8 to 15 are reserved and should not be used by applications.
The value of counters 0, 1, ..., 7 for the first multiprocessor can be obtained via the CUDA profiler by listing prof_trigger_00, prof_trigger_01, ..., prof_trigger_07 in the profiler.conf file (see the profiler manual for more details). All counters are reset before each kernel call (note that when an application is run via a CUDA debugger or profiler (cuda-gdb, CUDA Visual Profiler, Parallel Nsight), all launches are synchronous).

B.14 Formatted Output
Formatted output is only supported by devices of compute capability 2.x.
int printf(const char *format[, arg, ...]);
prints formatted output from a kernel to a host-side output stream.
The in-kernel printf() function behaves in a similar way to the standard C-library printf() function, and the user is referred to the host system's manual pages for a complete description of printf() behavior. In essence, the string passed in as format is output to a stream on the host, with substitutions made from the argument list wherever a format specifier is encountered. Supported format specifiers are listed below.
The printf() command is executed as any other device-side function: per-thread, and in the context of the calling thread. From a multi-threaded kernel, this means that a straightforward call to printf() will be executed by every thread, using that thread's data as specified. Multiple versions of the output string will then appear at the host stream, once for each thread which encountered the printf(). It is up to the programmer to limit the output to a single thread if only a single output string is desired (see Section B.14.4 for an illustrative example).
Unlike the C-standard printf(), which returns the number of characters printed, CUDA's printf() returns the number of arguments parsed. If no arguments follow the format string, 0 is returned. If the format string is NULL, -1 is returned. If an internal error occurs, -2 is returned.

B.14.1 Format Specifiers
As for standard printf(), format specifiers take the form:
%[flags][width][.precision][size]type
The following fields are supported (see widely-available documentation for a complete description of all behaviors):
 Flags: '#' ' ' '0' '+' '-'
 Width: '*' '0-9'
 Precision: '0-9'
 Size: 'h' 'l' 'll'
 Type: '%cdiouxXpeEfgGaAs'
Note that CUDA's printf() will accept any combination of flag, width, precision, size and type, whether or not overall they form a valid format specifier. In other words, "%hd" will be accepted and printf will expect a double-precision variable in the corresponding location in the argument list.

B.14.2 Limitations
Final formatting of the printf() output takes place on the host system. This means that the format string must be understood by the host-system's compiler and C library. Every effort has been made to ensure that the format specifiers supported by CUDA's printf function form a universal subset from the most common host compilers, but exact behavior will be host-O/S-dependent.
As described in Section B.14.1, printf() will accept all combinations of valid flags and types. This is because it cannot determine what will and will not be valid on the host system where the final output is formatted. The effect of this is that output may be undefined if the program emits a format string which contains invalid combinations.
The output buffer for printf() is set to a fixed size before kernel launch (see below). This buffer is circular, and is flushed at any host-side synchronisation point


and when the context is explicitly destroyed; if more output is produced during kernel execution than can fit in the buffer, older output is overwritten.
The printf() command can accept at most 32 arguments in addition to the format string. Additional arguments beyond this will be ignored, and the format specifier output as-is.
Owing to the differing size of the long type on 64-bit platforms (four bytes on 64-bit Windows platforms, eight bytes on other 64-bit platforms), a kernel which is compiled on a non-Windows 64-bit machine but then run on a win64 machine will see corrupted output for all format strings which include "%ld". It is recommended that the compilation platform matches the execution platform to ensure safety.
The output buffer for printf() is not flushed automatically to the output stream, but instead is flushed only when one of these actions is performed:
 Kernel launch via <<<>>> or cuLaunch(),
 Synchronization via cudaThreadSynchronize(), cuCtxSynchronize(), cudaStreamSynchronize(), or cuStreamSynchronize(),
 Module loading/unloading via cuModuleLoad() or cuModuleUnload(),
 Context destruction via cudaThreadExit() or cuCtxDestroy().
Note that the buffer is not flushed automatically when the program exits. The user must call cudaThreadExit() or cuCtxDestroy() explicitly, as shown in the examples below.

B.14.3 Associated Host-Side API
The following API functions get and set the size of the buffer used to transfer the printf() arguments and internal metadata to the host (default is 1 megabyte):

Driver API:
cuCtxGetLimit(size_t* size, CU_LIMIT_PRINTF_FIFO_SIZE)
cuCtxSetLimit(CU_LIMIT_PRINTF_FIFO_SIZE, size_t size)

Runtime API:
cudaThreadGetLimit(size_t* size, cudaLimitPrintfFifoSize)
cudaThreadSetLimit(cudaLimitPrintfFifoSize, size_t size)
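For example, the following host-side sketch doubles the current buffer size before launching any kernels:

size_t size;
cudaThreadGetLimit(&size, cudaLimitPrintfFifoSize);  // 1 MB by default
cudaThreadSetLimit(cudaLimitPrintfFifoSize, 2 * size);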

B.14.4 Examples
The following code sample:
__global__ void helloCUDA(float f)
{
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

void main()
{
    helloCUDA<<<1, 5>>>(1.2345f);

    cudaThreadExit();
}

will output:
Hello thread 0, f=1.2345
Hello thread 1, f=1.2345
Hello thread 2, f=1.2345
Hello thread 3, f=1.2345
Hello thread 4, f=1.2345

Notice how each thread encounters the printf() command, so there are as many lines of output as there were threads launched in the grid. As expected, global values (i.e. float f) are common between all threads, and local values (i.e. threadIdx.x) are distinct per-thread. The following code sample:
__global__ void helloCUDA(float f)
{
    if (threadIdx.x == 0)
        printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

void main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaThreadExit();
}

will output:
Hello thread 0, f=1.2345

Self-evidently, the if() statement limits which threads will call printf, so that only a single line of output is seen.

B.15 Dynamic Global Memory Allocation
void* malloc(size_t size);
void free(void* ptr);

allocate and free memory dynamically from a fixed-size heap in global memory.
The CUDA in-kernel malloc() function allocates at least size bytes from the device heap and returns a pointer to the allocated memory, or NULL if insufficient memory exists to fulfill the request. The returned pointer is guaranteed to be aligned to a 16-byte boundary.
The CUDA in-kernel free() function deallocates the memory pointed to by ptr, which must have been returned by a previous call to malloc(). If ptr is NULL, the call to free() is ignored. Repeated calls to free() with the same ptr have undefined behavior.
The memory allocated by a given CUDA thread via malloc() remains allocated for the lifetime of the CUDA context, or until it is explicitly released by a call to free(). It can be used by any other CUDA threads even from subsequent kernel launches. Any CUDA thread may free memory allocated by another thread, but care should be taken to ensure that the same pointer is not freed more than once.


B.15.1 Heap Memory Allocation
The device memory heap has a fixed size that must be specified before any program using malloc() or free() is loaded into the context. A default heap of eight megabytes is allocated if any program uses malloc() without explicitly specifying the heap size.
The following API functions get and set the heap size:

Driver API:
cuCtxGetLimit(size_t* size, CU_LIMIT_MALLOC_HEAP_SIZE)
cuCtxSetLimit(CU_LIMIT_MALLOC_HEAP_SIZE, size_t size)

Runtime API:
cudaThreadGetLimit(size_t* size, cudaLimitMallocHeapSize)
cudaThreadSetLimit(cudaLimitMallocHeapSize, size_t size)

The heap size granted will be at least size bytes. cuCtxGetLimit() and cudaThreadGetLimit() return the currently requested heap size. The actual memory allocation for the heap occurs when a module is loaded into the context, either explicitly via the CUDA driver API (see Section 3.3.2), or implicitly via the CUDA runtime API (see Section 3.2). If the memory allocation fails, the module load will generate a CUDA_ERROR_SHARED_OBJECT_INIT_FAILED error. Heap size cannot be changed once a module load has occurred and it does not resize dynamically according to need. Memory reserved for the device heap is in addition to memory allocated through host-side CUDA API calls such as cudaMalloc().

B.15.2 Interoperability with Host Memory API
Memory allocated via malloc() cannot be freed using the runtime or driver API (i.e. by calling any of the free memory functions from Sections 3.2.1 and 3.3.4). Similarly, memory allocated via the runtime or driver API (i.e. by calling any of the memory allocation functions from Sections 3.2.1 and 3.3.4) cannot be freed via free(). Memory allocated via malloc() can be copied using the runtime or driver API (i.e. by calling any of the copy memory functions from Sections 3.2.1 and 3.3.4).

B.15.3 Examples
B.15.3.1 Per Thread Allocation
The following code sample:
__global__ void mallocTest()
{
    char* ptr = (char*)malloc(123);
    printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
    free(ptr);


}

void main()
{
    // Set a heap size of 128 megabytes. Note that this must
    // be done before any kernel is launched.
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<1, 5>>>();
    cudaThreadSynchronize();
}

will output:
Thread 0 got pointer: 00057020
Thread 1 got pointer: 0005708c
Thread 2 got pointer: 000570f8
Thread 3 got pointer: 00057164
Thread 4 got pointer: 000571d0

Notice how each thread encounters the malloc() command and so receives its own allocation. (Exact pointer values will vary: these are illustrative.)

B.15.3.2 Per Thread Block Allocation
__global__ void mallocTest()
{
    __shared__ int* data;

    // The first thread in the block does the allocation
    // and then shares the pointer with all other threads
    // through shared memory, so that access can easily be
    // coalesced. 64 bytes per thread are allocated.
    if (threadIdx.x == 0)
        data = (int*)malloc(blockDim.x * 64);
    __syncthreads();

    // Check for failure
    if (data == NULL)
        return;

    // Threads index into the memory, ensuring coalescence
    int* ptr = data;
    for (int i = 0; i < 16; ++i)  // 16 ints = the 64 bytes per thread
        ptr[i * blockDim.x + threadIdx.x] = threadIdx.x;

    // Ensure all threads complete before freeing
    __syncthreads();

    // Only one thread may free the memory!
    if (threadIdx.x == 0)
        free(data);
}

void main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<10, 128>>>();
    cudaThreadSynchronize();


}

B.15.3.3 Allocation Persisting Between Kernel Launches
#define NUM_BLOCKS 20

__device__ int* dataptr[NUM_BLOCKS]; // Per-block pointer

__global__ void allocmem()
{
    // Only the first thread in the block does the allocation
    // since we want only one allocation per block.
    if (threadIdx.x == 0)
        dataptr[blockIdx.x] = (int*)malloc(blockDim.x * 4);
    __syncthreads();

    // Check for failure
    if (dataptr[blockIdx.x] == NULL)
        return;

    // Zero the data with all threads in parallel
    dataptr[blockIdx.x][threadIdx.x] = 0;
}

// Simple example: store thread ID into each element
__global__ void usemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        ptr[threadIdx.x] += threadIdx.x;
}

// Print the content of the buffer before freeing it
__global__ void freemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        printf("Block %d, Thread %d: final value = %d\n",
               blockIdx.x, threadIdx.x, ptr[threadIdx.x]);

    // Only free from one thread!
    if (threadIdx.x == 0)
        free(ptr);
}

void main()
{
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);

    // Allocate memory
    allocmem<<< NUM_BLOCKS, 10 >>>();

    // Use memory
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();
    usemem<<< NUM_BLOCKS, 10 >>>();

    // Free memory

    freemem<<< NUM_BLOCKS, 10 >>>();
    cudaThreadSynchronize();
}

B.16 Execution Configuration
Any call to a __global__ function must specify the execution configuration for that call. The execution configuration defines the dimension of the grid and blocks that will be used to execute the function on the device, as well as the associated stream (see Section 3.3.9.1 for a description of streams).
When using the driver API, the execution configuration is specified through a series of driver function calls as detailed in Section 3.3.3. When using the runtime API (Section 3.2), the execution configuration is specified by inserting an expression of the form <<< Dg, Db, Ns, S >>> between the function name and the parenthesized argument list, where:

 Dg is of type dim3 (see Section B.3.2) and specifies the dimension and size of the grid, such that Dg.x * Dg.y equals the number of blocks being launched; Dg.z must be equal to 1;
 Db is of type dim3 (see Section B.3.2) and specifies the dimension and size of each block, such that Db.x * Db.y * Db.z equals the number of threads per block;
 Ns is of type size_t and specifies the number of bytes in shared memory that is dynamically allocated per block for this call in addition to the statically allocated memory; this dynamically allocated memory is used by any of the variables declared as an external array as mentioned in Section B.2.3; Ns is an optional argument which defaults to 0;
 S is of type cudaStream_t and specifies the associated stream; S is an optional argument which defaults to 0.
As an example, a function declared as
__global__ void Func(float* parameter);

must be called like this:
Func<<< Dg, Db, Ns >>>(parameter);
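For instance, a concrete configuration might look as follows (an illustrative sketch), launching a 2D grid of 2D blocks with one float of dynamically allocated shared memory per thread:

dim3 Dg(64, 64);                          // 4096 blocks; Dg.z defaults to 1
dim3 Db(16, 16);                          // 256 threads per block
size_t Ns = Db.x * Db.y * sizeof(float);  // dynamic shared memory per block
Func<<< Dg, Db, Ns >>>(parameter);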

The arguments to the execution configuration are evaluated before the actual function arguments and, like the function arguments, are currently passed via shared memory to the device.
The function call will fail if Dg or Db are greater than the maximum sizes allowed for the device as specified in Appendix G, or if Ns is greater than the maximum amount of shared memory available on the device, minus the amount of shared memory required for static allocation, function arguments (for devices of compute capability 1.x), and execution configuration.


B.17 Launch Bounds
As discussed in detail in Section 5.2.3, the fewer registers a kernel uses, the more threads and thread blocks are likely to reside on a multiprocessor, which can improve performance. Therefore, the compiler uses heuristics to minimize register usage while keeping register spilling (see Section 5.3.2.2) and instruction count to a minimum. An application can optionally aid these heuristics by providing additional information to the compiler in the form of launch bounds that are specified using the __launch_bounds__() qualifier in the definition of a __global__ function:

__global__ void
__launch_bounds__(maxThreadsPerBlock, minBlocksPerMultiprocessor)
MyKernel(...)
{
    ...
}

 maxThreadsPerBlock specifies the maximum number of threads per block with which the application will ever launch MyKernel(); it compiles to the .maxntid PTX directive;
 minBlocksPerMultiprocessor is optional and specifies the desired minimum number of resident blocks per multiprocessor; it compiles to the .minnctapersm PTX directive.
If launch bounds are specified, the compiler first derives from them the upper limit L on the number of registers the kernel should use to ensure that minBlocksPerMultiprocessor blocks (or a single block if minBlocksPerMultiprocessor is not specified) of maxThreadsPerBlock threads can reside on the multiprocessor (see Section 4.2 for the relationship between the number of registers used by a kernel and the number of registers allocated per block). The compiler then optimizes register usage in the following way:
 If the initial register usage is higher than L, the compiler reduces it further until it becomes less than or equal to L, usually at the expense of more local memory usage and/or a higher number of instructions;
 If the initial register usage is lower than L:
  If maxThreadsPerBlock is specified and minBlocksPerMultiprocessor is not, the compiler uses maxThreadsPerBlock to determine the register usage thresholds for the transitions between n and n+1 resident blocks (i.e. when using one less register makes room for an additional resident block as in the example of Section 5.2.3) and then applies similar heuristics as when no launch bounds are specified;
  If both minBlocksPerMultiprocessor and maxThreadsPerBlock are specified, the compiler may increase register usage as high as L to reduce the number of instructions and better hide single thread instruction latency.
A kernel will fail to launch if it is executed with more threads per block than its launch bound maxThreadsPerBlock.


Optimal launch bounds for a given kernel will usually differ across major architecture revisions. The sample code below shows how this is typically handled in device code using the __CUDA_ARCH__ macro introduced in Section 3.1.4.

#define THREADS_PER_BLOCK         256
#if __CUDA_ARCH__ >= 200
    #define MY_KERNEL_MAX_THREADS (2 * THREADS_PER_BLOCK)
    #define MY_KERNEL_MIN_BLOCKS  3
#else
    #define MY_KERNEL_MAX_THREADS THREADS_PER_BLOCK
    #define MY_KERNEL_MIN_BLOCKS  2
#endif

// Device code
__global__ void
__launch_bounds__(MY_KERNEL_MAX_THREADS, MY_KERNEL_MIN_BLOCKS)
MyKernel(...)
{
    ...
}

In the common case where MyKernel is invoked with the maximum number of threads per block (specified as the first parameter of __launch_bounds__()), it is tempting to use MY_KERNEL_MAX_THREADS as the number of threads per block in the execution configuration:

// Host code
MyKernel<<<blocksPerGrid, MY_KERNEL_MAX_THREADS>>>(...);

This will not work however since __CUDA_ARCH__ is undefined in host code as mentioned in Section 3.1.4, so MyKernel will launch with 256 threads per block even when __CUDA_ARCH__ is greater or equal to 200. Instead the number of threads per block should be determined:
 Either at compile time using a macro that does not depend on __CUDA_ARCH__, for example
// Host code
MyKernel<<<blocksPerGrid, THREADS_PER_BLOCK>>>(...);
 Or at runtime based on the compute capability
// Host code
cudaGetDeviceProperties(&deviceProp, device);
int threadsPerBlock =
    (deviceProp.major >= 2 ?
        2 * THREADS_PER_BLOCK : THREADS_PER_BLOCK);
MyKernel<<<blocksPerGrid, threadsPerBlock>>>(...);

Register usage is reported by the --ptxas-options=-v compiler option. The number of resident blocks can be derived from the occupancy reported by the CUDA profiler (see Section 5.2.3 for a definition of occupancy).
Register usage can also be controlled for all __global__ functions in a file using the -maxrregcount compiler option. The value of -maxrregcount is ignored for functions with launch bounds.

Appendix C. Mathematical Functions

Functions from Section C.1 can be used in both host and device code whereas functions from Section C.2 can only be used in device code.
Note that floating-point functions are overloaded, so that in general, there are three prototypes for a given function <func-name>:
(1) double <func-name>(double), e.g. double log(double)
(2) float <func-name>(float), e.g. float log(float)
(3) float <func-name>f(float), e.g. float logf(float)
This means, in particular, that passing a float argument always results in a float result (variants (2) and (3) above).

C.1 Standard Functions
This section lists all the mathematical standard library functions supported in device code. It also specifies the error bounds of each function when executed on the device. These error bounds also apply when the function is executed on the host in the case where the host does not supply the function. They are generated from extensive but not exhaustive tests, so they are not guaranteed bounds.

C.1.1 Single-Precision Floating-Point Functions
Addition and multiplication are IEEE-compliant, so have a maximum error of 0.5 ulp. However, on the device, the compiler often combines them into a single multiply-add instruction (FMAD), and for devices of compute capability 1.x, FMAD truncates the intermediate result of the multiplication as mentioned in Section G.2. This combination can be avoided by using the __fadd_rn() and __fmul_rn() intrinsic functions (see Section C.2).
The recommended way to round a single-precision floating-point operand to an integer, with the result being a single-precision floating-point number, is rintf(), not roundf(). The reason is that roundf() maps to an 8-instruction sequence on the device, whereas rintf() maps to a single instruction. truncf(), ceilf(), and floorf() each map to a single instruction as well.
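Returning to the FMAD point above, a brief sketch of how the intrinsics keep the two operations separately rounded (the function name is hypothetical):

__device__ float madd(float a, float b, float c)
{
    float contracted = a * b + c;                    // may compile to an FMAD
    float separate = __fadd_rn(__fmul_rn(a, b), c);  // never merged into an FMAD
    // On compute capability 1.x the difference, if any, exposes
    // the truncation of the intermediate product
    return separate - contracted;
}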

Table C-1. Mathematical Standard Library Functions with Maximum ULP Error
The maximum error is stated as the absolute value of the difference in ulps between a correctly rounded single-precision result and the result returned by the CUDA library function.

Function                 Maximum ulp error
x+y                      0 (IEEE-754 round-to-nearest-even) (except for devices of compute capability 1.x when addition is merged into an FMAD)
x*y                      0 (IEEE-754 round-to-nearest-even) (except for devices of compute capability 1.x when multiplication is merged into an FMAD)
x/y                      0 for compute capability ≥ 2 when compiled with -prec-div=true, 2 (full range) otherwise
1/x                      0 for compute capability ≥ 2 when compiled with -prec-div=true, 1 (full range) otherwise
rsqrtf(x)                2 (full range)
1/sqrtf(x)               (applies to 1/sqrtf(x) only when it is converted to rsqrtf(x) by the compiler)
sqrtf(x)                 0 for compute capability ≥ 2 when compiled with -prec-sqrt=true, 3 (full range) otherwise
cbrtf(x)                 1 (full range)
rcbrtf(x)                2 (full range)
hypotf(x,y)              3 (full range)
expf(x)                  2 (full range)
exp2f(x)                 2 (full range)
exp10f(x)                2 (full range)
expm1f(x)                1 (full range)
logf(x)                  1 (full range)
log2f(x)                 3 (full range)
log10f(x)                3 (full range)
log1pf(x)                2 (full range)
sinf(x)                  2 (full range)
cosf(x)                  2 (full range)
tanf(x)                  4 (full range)
sincosf(x,sptr,cptr)     2 (full range)
sinpif(x)                2 (full range)
asinf(x)                 4 (full range)
acosf(x)                 3 (full range)
atanf(x)                 2 (full range)
atan2f(y,x)              3 (full range)
sinhf(x)                 3 (full range)
coshf(x)                 2 (full range)

tanhf(x)                 2 (full range)
asinhf(x)                3 (full range)
acoshf(x)                4 (full range)
atanhf(x)                3 (full range)
powf(x,y)                8 (full range)
erff(x)                  3 (full range)
erfcf(x)                 6 (full range)
erfinvf(x)               3 (full range)
erfcinvf(x)              7 (full range)
lgammaf(x)               6 (outside interval -10.001 ... -2.264; larger inside)
tgammaf(x)               11 (full range)
fmaf(x,y,z)              0 (full range)
frexpf(x,exp)            0 (full range)
ldexpf(x,exp)            0 (full range)
scalbnf(x,n)             0 (full range)
scalblnf(x,l)            0 (full range)
logbf(x)                 0 (full range)
ilogbf(x)                0 (full range)
fmodf(x,y)               0 (full range)
remainderf(x,y)          0 (full range)
remquof(x,y,iptr)        0 (full range)
modff(x,iptr)            0 (full range)
fdimf(x,y)               0 (full range)
truncf(x)                0 (full range)
roundf(x)                0 (full range)
rintf(x)                 0 (full range)
nearbyintf(x)            0 (full range)
ceilf(x)                 0 (full range)
floorf(x)                0 (full range)
lrintf(x)                0 (full range)
lroundf(x)               0 (full range)
llrintf(x)               0 (full range)
llroundf(x)              0 (full range)
signbit(x)               N/A
isinf(x)                 N/A
isnan(x)                 N/A
isfinite(x)              N/A
copysignf(x,y)           N/A
fminf(x,y)               N/A
fmaxf(x,y)               N/A
fabsf(x)                 N/A
nanf(cptr)               N/A
nextafterf(x,y)          N/A

C.1.2 Double-Precision Floating-Point Functions
The errors listed below only apply when compiling for devices with native double-precision support. When compiling for devices without such support, such as devices of compute capability 1.2 and lower, the double type gets demoted to float by default and the double-precision math functions are mapped to their single-precision equivalents.
The recommended way to round a double-precision floating-point operand to an integer, with the result being a double-precision floating-point number, is rint(), not round(). The reason is that round() maps to an 8-instruction sequence on the device, whereas rint() maps to a single instruction. trunc(), ceil(), and floor() each map to a single instruction as well.

Table C-2. Mathematical Standard Library Functions with Maximum ULP Error
The maximum error is stated as the absolute value of the difference in ulps between a correctly rounded double-precision result and the result returned by the CUDA library function.

Function                 Maximum ulp error
x+y                      0 (IEEE-754 round-to-nearest-even)
x*y                      0 (IEEE-754 round-to-nearest-even)
x/y                      0 (IEEE-754 round-to-nearest-even)
1/x                      0 (IEEE-754 round-to-nearest-even)
sqrt(x)                  0 (IEEE-754 round-to-nearest-even)
rsqrt(x)                 1 (full range)
cbrt(x)                  1 (full range)
rcbrt(x)                 1 (full range)
hypot(x,y)               2 (full range)
exp(x)                   1 (full range)
exp2(x)                  1 (full range)
exp10(x)                 1 (full range)
expm1(x)                 1 (full range)
log(x)                   1 (full range)
log2(x)                  1 (full range)
log10(x)                 1 (full range)
log1p(x)                 1 (full range)
sin(x)                   2 (full range)
cos(x)                   2 (full range)
tan(x)                   2 (full range)
sincos(x,sptr,cptr)      2 (full range)
sinpi(x)                 2 (full range)

asin(x)                  2 (full range)
acos(x)                  2 (full range)
atan(x)                  2 (full range)
atan2(y,x)               2 (full range)
sinh(x)                  1 (full range)
cosh(x)                  1 (full range)
tanh(x)                  1 (full range)
asinh(x)                 2 (full range)
acosh(x)                 2 (full range)
atanh(x)                 2 (full range)
pow(x,y)                 2 (full range)
erf(x)                   2 (full range)
erfc(x)                  5 (full range)
erfinv(x)                8 (full range)
erfcinv(x)               8 (full range)
lgamma(x)                4 (outside interval -11.0001 ... -2.2637; larger inside)
tgamma(x)                8 (full range)
fma(x,y,z)               0 (IEEE-754 round-to-nearest-even)
frexp(x,exp)             0 (full range)
ldexp(x,exp)             0 (full range)
scalbn(x,n)              0 (full range)
scalbln(x,l)             0 (full range)
logb(x)                  0 (full range)
ilogb(x)                 0 (full range)
fmod(x,y)                0 (full range)
remainder(x,y)           0 (full range)
remquo(x,y,iptr)         0 (full range)
modf(x,iptr)             0 (full range)
fdim(x,y)                0 (full range)
trunc(x)                 0 (full range)
round(x)                 0 (full range)
rint(x)                  0 (full range)
nearbyint(x)             0 (full range)
ceil(x)                  0 (full range)
floor(x)                 0 (full range)
lrint(x)                 0 (full range)
lround(x)                0 (full range)
llrint(x)                0 (full range)
llround(x)               0 (full range)
signbit(x)               N/A
isinf(x)                 N/A
isnan(x)                 N/A

isfinite(x)              N/A
copysign(x,y)            N/A
fmin(x,y)                N/A
fmax(x,y)                N/A
fabs(x)                  N/A
nan(cptr)                N/A
nextafter(x,y)           N/A

C.1.3 Integer Functions
Integer min(x,y) and max(x,y) are supported and map to a single instruction on the device.

C.2 Intrinsic Functions
This section lists the intrinsic functions that are only supported in device code. Among these functions are the less accurate, but faster versions of some of the functions of Section C.1; they have the same name prefixed with __ (such as __sinf(x)).
Functions suffixed with _rn operate using the round-to-nearest-even rounding mode.
Functions suffixed with _rz operate using the round-towards-zero rounding mode.
Functions suffixed with _ru operate using the round-up (to positive infinity) rounding mode.
Functions suffixed with _rd operate using the round-down (to negative infinity) rounding mode.

C.2.1 Single-Precision Floating-Point Functions
__fadd_rn() and __fmul_rn() map to addition and multiplication operations that the compiler never merges into FMADs. By contrast, additions and multiplications generated from the '*' and '+' operators will frequently be combined into FMADs.
The accuracy of floating-point division varies depending on the compute capability of the device and whether the code is compiled with -prec-div=false or -prec-div=true. For devices of compute capability 1.x, or for devices of compute capability 2.x when the code is compiled with -prec-div=false, both the regular division "/" operator and __fdividef(x,y) have the same accuracy, but for 2^126 < y < 2^128, __fdividef(x,y) delivers a result of zero, whereas the "/" operator delivers the correct result to within the accuracy stated in Table C-3. Also, for 2^126 < y < 2^128, if x is infinity, __fdividef(x,y) delivers a NaN (as a result of multiplying infinity by zero), while the "/" operator returns infinity. For devices of compute capability 2.x when the code is compiled with -prec-div=true, the "/" operator is IEEE compliant as mentioned in Section C.1.1.

Table C-3. Single-Precision Floating-Point Intrinsic Functions Supported by the CUDA Runtime Library with Respective Error Bounds

Function                       Error bounds
__fadd_[rn,rz,ru,rd](x,y)      IEEE-compliant.
__fmul_[rn,rz,ru,rd](x,y)      IEEE-compliant.
__fmaf_[rn,rz,ru,rd](x,y,z)    IEEE-compliant.
__frcp_[rn,rz,ru,rd](x)        IEEE-compliant.
__fsqrt_[rn,rz,ru,rd](x)       IEEE-compliant.
__fdiv_[rn,rz,ru,rd](x,y)      IEEE-compliant.
__fdividef(x,y)                For y in [2^-126, 2^126], the maximum ulp error is 2.
__expf(x)                      The maximum ulp error is 2 + floor(abs(1.16 * x)).
__exp10f(x)                    The maximum ulp error is 2 + floor(abs(2.95 * x)).
__logf(x)                      For x in [0.5, 2], the maximum absolute error is 2^-21.41, otherwise the maximum ulp error is 3.
__log2f(x)                     For x in [0.5, 2], the maximum absolute error is 2^-22, otherwise the maximum ulp error is 2.
__log10f(x)                    For x in [0.5, 2], the maximum absolute error is 2^-24, otherwise the maximum ulp error is 3.
__sinf(x)                      For x in [-π, π], the maximum absolute error is 2^-21.41, and larger otherwise.
__cosf(x)                      For x in [-π, π], the maximum absolute error is 2^-21.19, and larger otherwise.
__sincosf(x,sptr,cptr)         Same as __sinf(x) and __cosf(x).
__tanf(x)                      Derived from its implementation as __sinf(x) * (1 / __cosf(x)).
__powf(x,y)                    Derived from its implementation as exp2f(y * __log2f(x)).
__saturate(x)                  N/A

__saturate(x) returns 0 if x is less than 0, 1 if x is more than 1, and x otherwise.
__float2ll_[rn,rz,ru,rd](x) (respectively __float2ull_[rn,rz,ru,rd](x)) converts single-precision floating-point parameter x to a 64-bit signed (respectively unsigned) integer with the specified IEEE-754 rounding mode.

C.2.2 Double-Precision Floating-Point Functions
__dadd_rn() and __dmul_rn() map to addition and multiplication operations that the compiler never merges into FMADs. By contrast, additions and multiplications generated from the '*' and '+' operators will frequently be combined into FMADs.

Table C-4. Double-Precision Floating-Point Intrinsic Functions Supported by the CUDA Runtime Library with Respective Error Bounds

Function                       Error bounds
__dadd_[rn,rz,ru,rd](x,y)      IEEE-compliant.
__dmul_[rn,rz,ru,rd](x,y)      IEEE-compliant.
__fma_[rn,rz,ru,rd](x,y,z)     IEEE-compliant.
__ddiv_[rn,rz,ru,rd](x,y)      IEEE-compliant. Requires compute capability ≥ 2.
__drcp_[rn,rz,ru,rd](x)        IEEE-compliant. Requires compute capability ≥ 2.
__dsqrt_[rn,rz,ru,rd](x)       IEEE-compliant. Requires compute capability ≥ 2.

C.2.3 Integer Functions
__[u]mul24(x,y) computes the product of the 24 least significant bits of the integer parameters x and y and delivers the 32 least significant bits of the result. The 8 most significant bits of x or y are ignored.
__[u]mulhi(x,y) computes the product of the integer parameters x and y and delivers the 32 most significant bits of the 64-bit result.
__[u]mul64hi(x,y) computes the product of the 64-bit integer parameters x and y and delivers the 64 most significant bits of the 128-bit result.
__[u]sad(x,y,z) (Sum of Absolute Difference) returns the sum of integer parameter z and the absolute value of the difference between integer parameters x and y.
__clz(x) returns the number, between 0 and 32 inclusive, of consecutive zero bits starting at the most significant bit (i.e. bit 31) of integer parameter x.
__clzll(x) returns the number, between 0 and 64 inclusive, of consecutive zero bits starting at the most significant bit (i.e. bit 63) of 64-bit integer parameter x.
__ffs(x) returns the position of the first (least significant) bit set in integer parameter x. The least significant bit is position 1. If x is 0, __ffs() returns 0. Note that this is identical to the Linux function ffs.
__ffsll(x) returns the position of the first (least significant) bit set in 64-bit integer parameter x. The least significant bit is position 1. If x is 0, __ffsll() returns 0. Note that this is identical to the Linux function ffsll.
__popc(x) returns the number of bits that are set to 1 in the binary representation of 32-bit integer parameter x.

__popcll(x) returns the number of bits that are set to 1 in the binary representation of 64-bit integer parameter x.
__brev(x) reverses the bits of 32-bit unsigned integer parameter x, i.e. bit N of the result corresponds to bit 31-N of x.
__brevll(x) reverses the bits of 64-bit unsigned long long parameter x, i.e. bit N of the result corresponds to bit 63-N of x.
__byte_perm(x,y,s) returns, as a 32-bit integer r, four bytes from eight input bytes provided in the two input integers x and y. The input bytes are indexed as follows:
input[0] = x<0:7>     input[1] = x<8:15>
input[2] = x<16:23>   input[3] = x<24:31>
input[4] = y<0:7>     input[5] = y<8:15>
input[6] = y<16:23>   input[7] = y<24:31>
The selector indices are stored in 4-bit nibbles (with the upper 16 bits of the selector not being used):
selector[0] = s<0:3>    selector[1] = s<4:7>
selector[2] = s<8:11>   selector[3] = s<12:15>
The returned value r is computed to be:
result[n] := input[selector[n]]
where result[n] is the nth byte of r.
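For example (a sketch; the function name is hypothetical), selecting the four bytes of x in reverse order swaps its endianness:

__device__ unsigned int swap_endianness(unsigned int x)
{
    // Selector 0x0123 picks input[3], input[2], input[1], input[0]
    // as bytes 0..3 of the result
    return __byte_perm(x, 0, 0x0123);
}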

C.2.4 Type Casting Functions
There are two categories of type casting functions: the type conversion functions (Table C-5) and the type reinterpretation functions (Table C-6).
A type conversion function may change the binary representation of its input value. For example, __int2float_rn(0xC0000000) is equal to -1073741824.0f and __float2int_rn(1.0f) is equal to 1.
A type reinterpretation function does not change the binary representation of its input value. For example, __int_as_float(0xC0000000) is equal to -2.0f and __float_as_int(1.0f) is equal to 0x3f800000.

Table C-5. Type Conversion Functions
__float2int_[rn,rz,ru,rd](x)
__float2uint_[rn,rz,ru,rd](x)
__int2float_[rn,rz,ru,rd](x)
__uint2float_[rn,rz,ru,rd](x)
__float2ll_[rn,rz,ru,rd](x)
__float2ull_[rn,rz,ru,rd](x)
__ll2float_[rn,rz,ru,rd](x)
__ull2float_[rn,rz,ru,rd](x)
__float2half_rn(x)
__half2float(x)
__double2float_[rn,rz,ru,rd](x)
__double2int_[rn,rz,ru,rd](x)
__double2uint_[rn,rz,ru,rd](x)
__double2ll_[rn,rz,ru,rd](x)
__double2ull_[rn,rz,ru,rd](x)
__int2double_rn(x)
__uint2double_rn(x)
__ll2double_[rn,rz,ru,rd](x)
__ull2double_[rn,rz,ru,rd](x)

Table C-6. Type Reinterpretation Functions
__int_as_float(x)
__float_as_int(x)
__double_as_longlong(x)
__longlong_as_double(x)
__double2hiint(x)
__double2loint(x)
__hiloint2double(hi, lo)
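The following sketch (illustrative, not part of the original guide; the function name is an assumption) shows reinterpretation round trips that leave the bit patterns untouched:

// Illustrative round trips through the type reinterpretation functions.
__device__ void ReinterpretExamples()
{
    float f = 1.0f;
    int bits = __float_as_int(f);         // 0x3f800000
    float back = __int_as_float(bits);    // 1.0f again, bit for bit

    double d = 2.0;
    int hi = __double2hiint(d);           // upper 32 bits of d
    int lo = __double2loint(d);           // lower 32 bits of d
    double d2 = __hiloint2double(hi, lo); // bitwise identical to d
}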

Appendix D. C++ Language Constructs
CUDA supports the following C++ language constructs for device code:
 Polymorphism
 Default Parameters
 Operator Overloading
 Namespaces
 Function Templates
 Classes for devices of compute capability 2.x
These C++ constructs are implemented as specified in "The C++ Programming Language" reference. It is valid to use any of these constructs in .cu CUDA files for host, device, and kernel (__global__) functions. Any restrictions detailed in previous parts of this programming guide, like the lack of support for recursion, still apply.
The following subsections provide examples of the various constructs.

D.1 Polymorphism
Generally, polymorphism is the ability to define that functions or operators behave differently in different contexts. This is also referred to as function (and operator, see below) overloading.
In practical terms, this means that it is permissible to define two different functions within the same scope (namespace) as long as they have a distinguishable function signature, that is, the two functions either consume a different number of parameters or parameters of different types. When either of the multiple functions gets invoked the compiler resolves to the function's implementation that matches the function signature.
Because of implicit typecasting, a compiler may encounter multiple potential matches for a function invocation and in that case the matching rules as described in the C++ Language Standard apply. In practice this means that the compiler will pick the closest match in case of multiple potential matches.
Example: The following is valid CUDA code:
__device__ void f(float x) {

  // do something with x
}
__device__ void f(int i) {
  // do something with i
}
__device__ void f(double x, double y) {
  // do something with x and y
}

D.2 Default Parameters
With support for polymorphism as described in the previous subsection and the function signature matching rules in place it becomes possible to provide support for default values for function parameters.
Example:
__device__ void f(float x = 0.0f) {
  // do something with x
}
Kernel or other device functions can now invoke this version of f in one of two ways:
f();
// or
float x = /* some value */;
f(x);
Default parameters can only be given for the last n parameters of a function.

D.3 Operator Overloading
Operator overloading allows programmers to define operators for new data-types. Examples of overloadable operators in C++ are: +, -, *, /, +=, &, [], etc.
Example: The following is valid CUDA code, implementing the + operation between two uchar4 vectors:
__device__ uchar4 operator+ (const uchar4 & a, const uchar4 & b) {
  uchar4 r;
  r.x = a.x + b.x;
  ...
  return r;
}
This new operator can now be used like this:
uchar4 a, b, c;
a = b = /* some initial value */;

c = a + b;

D.4 Namespaces
Namespaces in C++ allow for the creation of a hierarchy of scopes of visibility. All the symbols inside a namespace can be used within this namespace without additional syntax.
The use of namespaces can be used to solve the problem of name-clashes (two different symbols using identical names), which commonly occurs when using multiple function libraries from different sources.
Example: The following code defines two functions "f()" in two separate namespaces ("nvidia" and "other"):
namespace nvidia {
  __device__ void f(float x) { /* do something with x */ ;}
}
namespace other {
  __device__ void f(float x) { /* do something with x */ ;}
}
The functions can now be used anywhere via fully qualified names:
nvidia::f(0.5f);
All the symbols in a namespace can be imported into another namespace (scope) like this:
using namespace nvidia;
f(0.5f);

D.5 Function Templates
Function templates are a form of meta-programming that allows writing a generic function in a data-type independent fashion. CUDA supports function templates to the full extent of the C++ standard, including the following concepts:
 Implicit template parameter deduction.
 Explicit instantiation.
 Template specialization.
Example:
template <typename T>
__device__ bool f(T x)
{
  return /* some clever code that turns x into a bool here */
}
This function will convert x of any data-type to a bool as long as the code in the function's body can be compiled for the actual type (T) of the variable x.
f() can be invoked in two ways:
int x = 1;

bool result = f(x);
This first type of invocation relies on the compiler's ability to implicitly deduce the correct function type for T. In this case the compiler would deduce T to be int and instantiate f<int>(x).
The second type of invoking the template function is via explicit instantiation like this:
bool result = f<double>(0.5);
Function templates may be specialized:
template <typename T>
__device__ bool f(T x)
{
  return false;
}

template <>
__device__ bool f<int>(int x)
{
  return true;
}
In this case the implementation for T representing the int type is specialized to return true; all other types will be caught by the more general template and return false.
The complete set of matching rules (for implicitly deducing template parameters) and matching polymorphous functions apply as specified in the C++ standard.

D.6 Classes
Code compiled for devices with compute capability 2.x and higher may make use of C++ classes, as long as none of the member functions are virtual (this restriction will be removed in some future release).
There are two common use cases for classes without virtual member functions:
 Small-data aggregations, e.g. data types like pixels (r, g, b, a), 2D and 3D points, vectors, etc.
 Functor classes. The use of functors is necessitated by the fact that device-function pointers are not supported and thus it is not possible to pass functions as template parameters. A workaround for this restriction is the use of functor classes (see code sample below).

D.6.1 Example 1: Pixel Data Type
The following is an example of a data type for RGBA pixels with 8 bit per channel depth:
class PixelRGBA {
public:
  __device__ PixelRGBA(): r_(0), g_(0), b_(0), a_(0) { ; }

  __device__ PixelRGBA(unsigned char r, unsigned char g,
                       unsigned char b, unsigned char a = 255):
    r_(r), g_(g), b_(b), a_(a) { ; }
  // other methods and operators left out for sake of brevity
private:
  unsigned char r_, g_, b_, a_;
  friend PixelRGBA operator+(const PixelRGBA &, const PixelRGBA &);
};

__device__ PixelRGBA operator+(const PixelRGBA & p1, const PixelRGBA & p2)
{
  return PixelRGBA(p1.r_ + p2.r_, p1.g_ + p2.g_,
                   p1.b_ + p2.b_, p1.a_ + p2.a_);
}

Other device code can now make use of this new data type as one would expect:
PixelRGBA p1, p2;
// [...] initialization of p1 and p2 here
PixelRGBA p3 = p1 + p2;

D.6.2 Example 2: Functor Class
The following example shows how functors may be used as function template parameters to implement a set of vector arithmetic operations. Here are two functors for float addition and subtraction:
class Add {
public:
  __device__ float operator() (float a, float b) const
  {
    return a + b;
  }
};

class Sub {
public:
  __device__ float

  operator() (float a, float b) const
  {
    return a - b;
  }
};

The following templatized kernel makes use of the functors like the ones above in order to implement operations on vectors of floats:
// Device code
template<class O>
__global__ void VectorOperation(const float * A, const float * B,
                                float * C, unsigned int N, O op)
{
  unsigned int iElement = blockDim.x * blockIdx.x + threadIdx.x;
  if (iElement < N) {
    C[iElement] = op(A[iElement], B[iElement]);
  }
}

The VectorOperation kernel may now be launched like this in order to get a vector addition:
// Host code
VectorOperation<<<blocks, threads>>>(v1, v2, v3, N, Add());
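In the same way, passing the Sub functor (an illustrative call, not in the original text) instantiates a subtraction kernel from the identical template:

// Host code
VectorOperation<<<blocks, threads>>>(v1, v2, v3, N, Sub());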

Appendix E. NVCC Specifics

E.1 __noinline__ and __forceinline__
When compiling code for devices of compute capability 1.x, a __device__ function is always inlined by default. When compiling code for devices of compute capability 2.x, a __device__ function is only inlined when deemed appropriate by the compiler.
The __noinline__ function qualifier can be used as a hint for the compiler not to inline the function if possible. The function body must still be in the same file where it is called. For devices of compute capability 1.x, the compiler will not honor the __noinline__ qualifier for functions with pointer parameters and for functions with large parameter lists. For devices of compute capability 2.x, the compiler will always honor the __noinline__ qualifier.
The __forceinline__ function qualifier can be used to force the compiler to inline the function.

E.2 #pragma unroll
By default, the compiler unrolls small loops with a known trip count. The #pragma unroll directive however can be used to control unrolling of any given loop. It must be placed immediately before the loop and only applies to that loop. It is optionally followed by a number that specifies how many times the loop must be unrolled.
For example, in this code sample:
#pragma unroll 5
for (int i = 0; i < n; ++i)
the loop will be unrolled 5 times. It is up to the programmer to make sure that the specified unroll number gives the best performance. The compiler will also insert code to ensure correctness (in the example above, to ensure that there will only be n iterations if n is less than 5). #pragma unroll 1 will prevent the compiler from ever unrolling a loop.
If no number is specified after #pragma unroll, the loop is completely unrolled if its trip count is constant, otherwise it is not unrolled at all.
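As a hedged illustration (not part of the original text; the kernel name and the per-thread tile of 8 elements are assumptions), the directive can be applied to the inner loop of a complete kernel:

// Each thread scales 8 consecutive elements; the constant trip count
// lets the compiler honor the requested unrolling.
__global__ void Scale8(float* data, float factor)
{
    int base = (blockDim.x * blockIdx.x + threadIdx.x) * 8;
    #pragma unroll 8
    for (int i = 0; i < 8; ++i)
        data[base + i] *= factor;
}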

E.3 __restrict__
nvcc supports restricted pointers via the __restrict__ keyword.
Restricted pointers were introduced in C99 to alleviate the aliasing problem that exists in C-type languages and which inhibits all kinds of optimization from code reordering to common sub-expression elimination.
Here is an example subject to the aliasing issue, where use of restricted pointers can help the compiler to reduce the number of instructions:
void foo(const float* a, const float* b, float* c)
{
  c[0] = a[0] * b[0];
  c[1] = a[0] * b[0];
  c[2] = a[0] * b[0] * a[1];
  c[3] = a[0] * a[1];
  c[4] = a[0] * b[0];
  c[5] = b[0];
  ...
}
In C-type languages, the pointers a, b, and c may be aliased, so any write through c could modify elements of a or b. This means that to guarantee functional correctness, the compiler cannot load a[0] and b[0] into registers, multiply them, and store the result to both c[0] and c[1], because the results would differ from the abstract execution model if, say, a[0] is really the same location as c[0]. So the compiler cannot take advantage of the common sub-expression. Likewise, the compiler cannot just reorder the computation of c[4] into the proximity of the computation of c[0] and c[1] because the preceding write to c[3] could change the inputs to the computation of c[4].
By making a, b, and c restricted pointers, the programmer asserts to the compiler that the pointers are in fact not aliased, which in this case means writes through c would never overwrite elements of a or b. This changes the function prototype as follows:
void foo(const float* __restrict__ a,
         const float* __restrict__ b,
         float* __restrict__ c);
Note that all pointer arguments need to be made restricted for the compiler optimizer to derive any benefit. With the __restrict__ keywords added, the compiler can now reorder and do common sub-expression elimination at will, while retaining functionality identical with the abstract execution model:
void foo(const float* __restrict__ a,
         const float* __restrict__ b,
         float* __restrict__ c)
{
  float t0 = a[0];

  float t1 = b[0];
  float t2 = t0 * t1;
  float t3 = a[1];
  c[0] = t2;
  c[1] = t2;
  c[4] = t2;
  c[2] = t2 * t3;
  c[3] = t0 * t3;
  c[5] = t1;
  ...
}
The effects here are a reduced number of memory accesses and a reduced number of computations. This is balanced by an increase in register pressure due to "cached" loads and common sub-expressions.
Since register pressure is a critical issue in many CUDA codes, use of restricted pointers can have negative performance impact on CUDA code, due to reduced occupancy.
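The same keyword applies to kernel parameters; here is a minimal sketch (not from the original guide, with an assumed kernel name) where the aliasing assertion lets the compiler schedule the two loads freely:

__global__ void Blend(const float* __restrict__ a,
                      const float* __restrict__ b,
                      float* __restrict__ c, int n)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        c[i] = 0.5f * (a[i] + b[i]); // a, b, and c are asserted not to alias
}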


Appendix F. Texture Fetching
This appendix gives the formula used to compute the value returned by the texture functions of Section B.8 depending on the various attributes of the texture reference (see Section 3.2.4).
The texture bound to the texture reference is represented as an array T of N texels for a one-dimensional texture, N x M texels for a two-dimensional texture, or N x M x L texels for a three-dimensional texture. It is fetched using texture coordinates x, y, and z.
A texture coordinate must fall within T's valid addressing range before it can be used to address T. The addressing mode specifies how an out-of-range texture coordinate x is remapped to the valid range. If x is non-normalized, only the clamp addressing mode is supported and x is replaced by 0 if x < 0 and N - 1 if N <= x. If x is normalized:
 In clamp addressing mode, x is replaced by 0 if x < 0 and 1 - 1/N if 1 <= x;
 In wrap addressing mode, x is replaced by frac(x), where frac(x) = x - floor(x) and floor(x) is the largest integer not greater than x.
In the remainder of the appendix, x, y, and z are the non-normalized texture coordinates remapped to T's valid addressing range. x, y, and z are derived from the normalized texture coordinates x̂, ŷ, and ẑ as such: x = N x̂, y = M ŷ, and z = L ẑ.

F.1 Nearest-Point Sampling
In this filtering mode, the value returned by the texture fetch is:
 tex(x) = T[i] for a one-dimensional texture,
 tex(x, y) = T[i, j] for a two-dimensional texture,
 tex(x, y, z) = T[i, j, k] for a three-dimensional texture,
where i = floor(x), j = floor(y), and k = floor(z).
Figure F-1 illustrates nearest-point sampling for a one-dimensional texture with N = 4.
For integer textures, the value returned by the texture fetch can be optionally remapped to [0.0, 1.0] (see Section 3.2.4.1.1).

[Figure F-1. Nearest-Point Sampling of a One-Dimensional Texture of Four Texels: tex(x) is piecewise constant, taking the values T[0] through T[3] over the non-normalized coordinate range 0 to 4 (normalized range 0 to 1).]

F.2 Linear Filtering
In this filtering mode, which is only available for floating-point textures, the value returned by the texture fetch is:
 tex(x) = (1 - α)T[i] + αT[i+1] for a one-dimensional texture,

 tex(x, y) = (1-α)(1-β)T[i, j] + α(1-β)T[i+1, j] + (1-α)βT[i, j+1] + αβT[i+1, j+1] for a two-dimensional texture,
 tex(x, y, z) =
   (1-α)(1-β)(1-γ)T[i, j, k] + α(1-β)(1-γ)T[i+1, j, k] +
   (1-α)β(1-γ)T[i, j+1, k] + αβ(1-γ)T[i+1, j+1, k] +
   (1-α)(1-β)γT[i, j, k+1] + α(1-β)γT[i+1, j, k+1] +
   (1-α)βγT[i, j+1, k+1] + αβγT[i+1, j+1, k+1]
 for a three-dimensional texture,
where:
 i = floor(x_B), α = frac(x_B), x_B = x - 0.5,
 j = floor(y_B), β = frac(y_B), y_B = y - 0.5,
 k = floor(z_B), γ = frac(z_B), z_B = z - 0.5.
α, β, and γ are stored in 9-bit fixed point format with 8 bits of fractional value (so 1.0 is exactly represented).
Figure F-2 illustrates linear filtering for a one-dimensional texture with N = 4.

[Figure F-2. Linear Filtering of a One-Dimensional Texture of Four Texels in Clamp Addressing Mode: tex(x) interpolates linearly between T[0] and T[3] over the non-normalized coordinate range 0 to 4 (normalized range 0 to 1).]

F.3 Table Lookup
A table lookup TL(x), where x spans the interval [0, R], can be implemented as TL(x) = tex((N-1)/R * x + 0.5) in order to ensure that TL(0) = T[0] and TL(R) = T[N-1].
Figure F-3 illustrates the use of texture filtering to implement a table lookup with R = 4 or R = 1 from a one-dimensional texture with N = 4.

[Figure F-3. One-Dimensional Table Lookup Using Linear Filtering: TL(x) ramps linearly through T[0] to T[3] as x goes from 0 to R, with breakpoints at 4/3 and 8/3 for R = 4 (1/3 and 2/3 for R = 1).]
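A kernel implementing this lookup could be sketched as follows (illustrative and not part of the original text; texRef is assumed to be bound to a one-dimensional CUDA array of N float texels with cudaFilterModeLinear and non-normalized coordinates):

texture<float, 1, cudaReadModeElementType> texRef;

// Illustrative table lookup: out[i] = TL(x) for n sample points x in [0, R].
__global__ void TableLookup(float* out, int n, float R, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        float x = R * i / (n - 1);                       // sample point in [0, R]
        out[i] = tex1D(texRef, (N - 1) / R * x + 0.5f);  // TL(x) as given above
    }
}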

x.x and 2.3 and 0 give more details on the architecture of devices of compute capability 1. CUDA C Programming Guide Version 3. Compute Capabilities The general specifications and features of a compute device depend on its compute capability (see Section 2. respectively.5). Section G. Section G.Appendix G.1 153 .2 reviews the compliance with the IEEE floating-point standard. Section G.1 gives the features and technical specifications associated to each compute capability.

G.1 Features and Technical Specifications

Compute Capability Feature Support (unlisted features are supported for all compute capabilities):
 Integer atomic functions operating on 32-bit words in global memory (Section B.11): No for 1.0; Yes for 1.1 and higher.
 Integer atomic functions operating on 64-bit words in global memory (Section B.11): No for 1.0 and 1.1; Yes for 1.2 and higher.
 Integer atomic functions operating on 32-bit words in shared memory (Section B.11): No for 1.0 and 1.1; Yes for 1.2 and higher.
 Warp vote functions (Section B.12): No for 1.0 and 1.1; Yes for 1.2 and higher.
 Double-precision floating-point numbers: No for 1.0, 1.1, and 1.2; Yes for 1.3 and higher.
 Floating-point atomic addition operating on 32-bit words in global and shared memory (Section B.11): 2.x only.
 __ballot() (Section B.12): 2.x only.
 __threadfence_system() (Section B.5): 2.x only.
 __syncthreads_count(), __syncthreads_and(), __syncthreads_or() (Section B.6): 2.x only.
 Surface functions (Section B.9): 2.x only.

Technical Specifications per Compute Capability:
 Maximum x- or y-dimension of a grid of thread blocks: 65535.
 Maximum number of threads per block: 512 for 1.x; 1024 for 2.x.
 Maximum x- or y-dimension of a block: 512 for 1.x; 1024 for 2.x.
 Maximum z-dimension of a block: 64.
 Warp size: 32.
 Maximum number of resident blocks per multiprocessor: 8.
 Maximum number of resident warps per multiprocessor: 24 for 1.0-1.1; 32 for 1.2-1.3; 48 for 2.x.
 Maximum number of resident threads per multiprocessor: 768 for 1.0-1.1; 1024 for 1.2-1.3; 1536 for 2.x.
 Number of 32-bit registers per multiprocessor: 8 K for 1.0-1.1; 16 K for 1.2-1.3; 32 K for 2.x.
 Maximum amount of shared memory per multiprocessor: 16 KB for 1.x; 48 KB for 2.x.

 Number of shared memory banks: 16 for 1.x; 32 for 2.x.
 Amount of local memory per thread: 16 KB for 1.x; 512 KB for 2.x.
 Constant memory size: 64 KB.
 Cache working set per multiprocessor for constant memory: 8 KB.
 Cache working set per multiprocessor for texture memory: Device dependent, between 6 KB and 8 KB.
 Maximum width for a 1D texture reference bound to a CUDA array: 8192 for 1.x; 32768 for 2.x.
 Maximum width for a 1D texture reference bound to linear memory: 2^27.
 Maximum width and height for a 2D texture reference bound to linear memory or to a CUDA array: 65536 x 32768 for 1.x; 65536 x 65535 for 2.x.
 Maximum width, height, and depth for a 3D texture reference bound to linear memory or a CUDA array: 2048 x 2048 x 2048.
 Maximum number of textures that can be bound to a kernel: 128.
 Maximum width for a 1D surface reference bound to a CUDA array: 8192 for 2.x; N/A for 1.x.
 Maximum width and height for a 2D surface reference bound to a CUDA array: 8192 x 8192 for 2.x; N/A for 1.x.
 Maximum number of surfaces that can be bound to a kernel: 8 for 2.x; N/A for 1.x.
 Maximum number of instructions per kernel: 2 million.

G.2 Floating-Point Standard
All compute devices follow the IEEE 754-2008 standard for binary floating-point arithmetic with the following deviations:
 There is no dynamically configurable rounding mode; however, most of the operations support multiple IEEE rounding modes, exposed via device intrinsics;
 There is no mechanism for detecting that a floating-point exception has occurred and all operations behave as if the IEEE-754 exceptions are always masked, and deliver the masked response as defined by IEEE-754 if there is an exceptional event; for the same reason, while SNaN encodings are supported, they are not signaling and are handled as quiet;
 The result of a single-precision floating-point operation involving one or more input NaNs is the quiet NaN of bit pattern 0x7fffffff;

 Double-precision floating-point absolute value and negation are not compliant with IEEE-754 with respect to NaNs; these are passed through unchanged;
 For single-precision floating-point numbers on devices of compute capability 1.x:
  Denormalized numbers are not supported; floating-point arithmetic and comparison instructions convert denormalized operands to zero prior to the floating-point operation;
  Underflowed results are flushed to zero;
  Some instructions are not IEEE-compliant:
   Addition and multiplication are often combined into a single multiply-add instruction (FMAD), which truncates (i.e. without rounding) the intermediate mantissa of the multiplication;
   Division is implemented via the reciprocal in a non-standard-compliant way;
   Square root is implemented via the reciprocal square root in a non-standard-compliant way;
   For addition and multiplication, only round-to-nearest-even and round-towards-zero are supported via static rounding modes; directed rounding towards +/- infinity is not supported;
  To mitigate the impact of these restrictions, IEEE-compliant software (and therefore slower) implementations are provided through the following intrinsics (c.f. Section C.2.1):
   __fmaf_r{n,z,u,d}(float, float, float): single-precision fused multiply-add with IEEE rounding modes,
   __frcp_r[n,z,u,d](float): single-precision reciprocal with IEEE rounding modes,
   __fdiv_r[n,z,u,d](float, float): single-precision division with IEEE rounding modes,
   __fsqrt_r[n,z,u,d](float): single-precision square root with IEEE rounding modes,
   __fadd_r[u,d](float, float): single-precision addition with IEEE directed rounding,
   __fmul_r[u,d](float, float): single-precision multiplication with IEEE directed rounding;
 For double-precision floating-point numbers on devices of compute capability 1.x:
  Round-to-nearest-even is the only supported IEEE rounding mode for reciprocal, division, and square root.
When compiling for devices without native double-precision floating-point support, i.e. devices of compute capability 1.2 and lower, each double variable is converted to single-precision floating-point format (but retains its size of 64 bits) and double-precision floating-point arithmetic gets demoted to single-precision floating-point arithmetic.
For devices of compute capability 2.x, code must be compiled with -ftz=false, -prec-div=true, and -prec-sqrt=true to ensure IEEE compliance (this is the default setting; see the nvcc user manual for description of these compilation flags); code compiled with -ftz=true, -prec-div=false, and -prec-sqrt=false comes closest to the code generated for devices of compute capability 1.x.
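As a hedged illustration (not part of the original guide; the function name is an assumption), the intrinsics listed above can be used where IEEE-compliant rounding is required even on devices of compute capability 1.x:

// Illustrative IEEE-compliant dot product of two 3-component vectors,
// using round-to-nearest-even fused multiply-adds.
__device__ float Dot3IEEE(const float* u, const float* v)
{
    float s = 0.0f;
    s = __fmaf_rn(u[0], v[0], s);
    s = __fmaf_rn(u[1], v[1], s);
    s = __fmaf_rn(u[2], v[2], s);
    return s;
}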

Addition and multiplication are often combined into a single multiply-add instruction:
 FMAD for single precision on devices of compute capability 1.x,
 FFMA for single precision on devices of compute capability 2.x.
As mentioned above, FMAD truncates the mantissa prior to use it in the addition. FFMA, on the other hand, is an IEEE-754(2008) compliant fused multiply-add instruction, so the full-width product is being used in the addition and a single rounding occurs during generation of the final result. While FFMA in general has superior numerical properties compared to FMAD, the switch from FMAD to FFMA can cause slight changes in numeric results and can in rare circumstances lead to slightly larger error in final results.
In accordance to the IEEE-754R standard, if one of the input parameters to fminf(), fmin(), fmaxf(), or fmax() is NaN, but not the other, the result is the non-NaN parameter.
The conversion of a floating-point value to an integer value in the case where the floating-point value falls outside the range of the integer format is left undefined by IEEE-754. For compute devices, the behavior is to clamp to the end of the supported range. This is unlike the x86 architecture behavior.

G.3 Compute Capability 1.x

G.3.1 Architecture
For devices of compute capability 1.x, a multiprocessor consists of:
 8 CUDA cores for integer and single-precision floating-point arithmetic operations,
 1 double-precision floating-point unit for double-precision floating-point arithmetic operations,
 2 special function units for single-precision floating-point transcendental functions (these units can also handle single-precision floating-point multiplications),
 1 warp scheduler.
To execute an instruction for all threads of a warp, the warp scheduler must therefore issue the instruction over:
 4 clock cycles for an integer or single-precision floating-point arithmetic instruction,
 32 clock cycles for a double-precision floating-point arithmetic instruction,
 16 clock cycles for a single-precision floating-point transcendental instruction.
A multiprocessor also has a read-only constant cache that is shared by all functional units and speeds up reads from the constant memory space, which resides in device memory.

Multiprocessors are grouped into Texture Processor Clusters (TPCs). The number of multiprocessors per TPC is:
 2 for devices of compute capabilities 1.0 and 1.1,
 3 for devices of compute capabilities 1.2 and 1.3.
Each TPC has a read-only texture cache that is shared by all multiprocessors and speeds up reads from the texture memory space, which resides in device memory. Each multiprocessor accesses the texture cache via a texture unit that implements the various addressing modes and data filtering mentioned in Section 3.2.4.
The local and global memory spaces reside in device memory and are not cached.

G.3.2 Global Memory
A global memory request for a warp is split into two memory requests, one for each half-warp, that are issued independently. Sections G.3.2.1 and G.3.2.2 describe how the memory accesses of threads within a half-warp are coalesced into one or more memory transactions depending on the compute capability of the device. Figure G-1 shows some examples of global memory accesses and corresponding memory transactions based on compute capability.

G.3.2.1 Devices of Compute Capability 1.0 and 1.1
To coalesce, the memory request for a half-warp must satisfy the following conditions:
 The size of the words accessed by the threads must be 4, 8, or 16 bytes;
 If this size is:
  4, all 16 words must lie in the same 64-byte segment,
  8, all 16 words must lie in the same 128-byte segment,
  16, the first 8 words must lie in the same 128-byte segment and the last 8 words in the following 128-byte segment;
 Threads must access the words in sequence: The kth thread in the half-warp must access the kth word.
If the half-warp meets these requirements, a 64-byte memory transaction, a 128-byte memory transaction, or two 128-byte memory transactions are issued if the size of the words accessed by the threads is 4, 8, or 16, respectively. Coalescing is achieved even if the warp is divergent, i.e. there are some inactive threads that do not actually access memory.
If the half-warp does not meet these requirements, 16 separate 32-byte memory transactions are issued.

G.3.2.2 Devices of Compute Capability 1.2 and 1.3
Threads can access any words in any order, including the same words, and a single memory transaction for each segment addressed by the half-warp is issued. This is in contrast with devices of compute capabilities 1.0 and 1.1 where threads need to access words in sequence and coalescing only happens if the half-warp addresses a single segment.

More precisely, the following protocol is used to determine the memory transactions necessary to service all threads in a half-warp:
 Find the memory segment that contains the address requested by the lowest numbered active thread. The segment size depends on the size of the words accessed by the threads:
  32 bytes for 1-byte words,
  64 bytes for 2-byte words,
  128 bytes for 4-, 8- and 16-byte words.
 Find all other active threads whose requested address lies in the same segment.
 Reduce the transaction size, if possible:
  If the transaction size is 128 bytes and only the lower or upper half is used, reduce the transaction size to 64 bytes;
  If the transaction size is 64 bytes (originally or after reduction from 128 bytes) and only the lower or upper half is used, reduce the transaction size to 32 bytes.
 Carry out the transaction and mark the serviced threads as inactive.
 Repeat until all threads in the half-warp are serviced.
The resulting memory transactions are serviced at the throughput of device memory.

G.3.3 Shared Memory
Shared memory has 16 banks that are organized such that successive 32-bit words are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32 bits per two clock cycles.
A shared memory request for a warp is split into two memory requests, one for each half-warp, that are issued independently. As a consequence, there can be no bank conflict between a thread belonging to the first half of a warp and a thread belonging to the second half of the same warp.
If a non-atomic instruction executed by a warp writes to the same location in shared memory for more than one of the threads of the warp, only one thread per half-warp performs a write and which thread performs the final write is undefined.

G.3.3.1 32-Bit Strided Access
A common access pattern is for each thread to access a 32-bit word from an array indexed by the thread ID tid and with some stride s:
__shared__ float shared[32];
float data = shared[BaseIndex + s * tid];
In this case, threads tid and tid+n access the same bank whenever s*n is a multiple of the number of banks (i.e. 16) or, equivalently, whenever n is a multiple of 16/d where d is the greatest common divisor of 16 and s. As a consequence, there will be no bank conflict only if half the warp size (i.e. 16) is less than or equal to 16/d, that is only if d is equal to 1, i.e. s is odd.
Figure G-2 shows some examples of strided access for devices of compute capability 2.x. The same examples apply for devices of compute capability 1.x, but with 16 banks instead of 32.
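As an illustrative aside (not in the original text), a common way to obtain an odd stride is to pad each row of a two-dimensional tile by one word:

// With 16 banks, reading a 16x16 tile column-wise would use a stride of 16
// (d = 16) and cause 16-way bank conflicts; the extra padding column makes
// the effective stride 17, which is odd, so the accesses are conflict-free.
__shared__ float tile[16][16 + 1];
float data = tile[threadIdx.x][threadIdx.y];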

G.3.3.2 32-Bit Broadcast Access
Shared memory features a broadcast mechanism whereby a 32-bit word can be read and broadcast to several threads simultaneously when servicing one memory read request. This reduces the number of bank conflicts when several threads read from an address within the same 32-bit word. More precisely, a memory read request made of several addresses is serviced in several steps over time by servicing one conflict-free subset of these addresses per step until all addresses have been serviced; at each step, the subset is built from the remaining addresses that have yet to be serviced using the following procedure:
 Select one of the words pointed to by the remaining addresses as the broadcast word;
 Include in the subset:
  All addresses that are within the broadcast word,
  One address for each bank (other than the broadcasting bank) pointed to by the remaining addresses.
Which word is selected as the broadcast word and which address is picked up for each bank at each cycle are unspecified.
A common conflict-free case is when all threads of a half-warp read from an address within the same 32-bit word.
Figure G-3 shows some examples of memory read accesses that involve the broadcast mechanism. The same examples apply for devices of compute capability 1.x, but with 16 banks instead of 32.

G.3.3.3 8-Bit and 16-Bit Access
8-bit and 16-bit accesses typically generate bank conflicts. For example, there are bank conflicts if an array of char is accessed the following way:
__shared__ char shared[32];
char data = shared[BaseIndex + tid];
because shared[0], shared[1], shared[2], and shared[3], for example, belong to the same bank. There are no bank conflicts however, if the same array is accessed the following way:
char data = shared[BaseIndex + 4 * tid];

G.3.3.4 Larger Than 32-Bit Access
Accesses that are larger than 32-bit per thread are split into 32-bit accesses that typically generate bank conflicts.
For example, there are 2-way bank conflicts for arrays of doubles accessed as follows:
__shared__ double shared[32];
double data = shared[BaseIndex + tid];
as the memory request is compiled into two separate 32-bit requests with a stride of two. One way to avoid bank conflicts in this case is to split the double operands like in the following sample code:
__shared__ int shared_lo[32];
__shared__ int shared_hi[32];

double dataIn;
shared_lo[BaseIndex + tid] = __double2loint(dataIn);
shared_hi[BaseIndex + tid] = __double2hiint(dataIn);
double dataOut = __hiloint2double(shared_hi[BaseIndex + tid],
                                  shared_lo[BaseIndex + tid]);
This might not always improve performance however and does perform worse on devices of compute capabilities 2.x.
The same applies to structure assignments. The following code, for example:
__shared__ struct type shared[32];
struct type data = shared[BaseIndex + tid];
results in:
 Three separate reads without bank conflicts if type is defined as
struct type { float x, y, z; };
since each member is accessed with an odd stride of three 32-bit words;
 Two separate reads with bank conflicts if type is defined as
struct type { float x, y; };
since each member is accessed with an even stride of two 32-bit words.

G.4 Compute Capability 2.x

G.4.1 Architecture
For devices of compute capability 2.x, a multiprocessor consists of:
 For devices of compute capability 2.0:
  32 CUDA cores for integer and floating-point arithmetic operations,
  4 special function units for single-precision floating-point transcendental functions;
 For devices of compute capability 2.1:
  48 CUDA cores for integer and floating-point arithmetic operations,
  8 special function units for single-precision floating-point transcendental functions;
 2 warp schedulers.
At every instruction issue time, each scheduler issues:
 One instruction for devices of compute capability 2.0,
 Two instructions for devices of compute capability 2.1,
for some warp that is ready to execute, if any. The first scheduler is in charge of the warps with an odd ID and the second scheduler is in charge of the warps with an even ID.

Note that when a scheduler issues a double-precision floating-point instruction, the other scheduler cannot issue any instruction.
A warp scheduler can issue an instruction to only half of the CUDA cores. To execute an instruction for all threads of a warp, a warp scheduler must therefore issue the instruction over two clock cycles for an integer or floating-point arithmetic instruction.
A multiprocessor also has a read-only uniform cache that is shared by all functional units and speeds up reads from the constant memory space, which resides in device memory.
There is an L1 cache for each multiprocessor and an L2 cache shared by all multiprocessors, both of which are used to cache accesses to local or global memory, including temporary register spills. The cache behavior (e.g. whether reads are cached in both L1 and L2 or in L2 only) can be partially configured on a per-access basis using modifiers to the load or store instruction.
The same on-chip memory is used for both L1 and shared memory: It can be configured as 48 KB of shared memory and 16 KB of L1 cache or as 16 KB of shared memory and 48 KB of L1 cache, using cudaFuncSetCacheConfig()/cuFuncSetCacheConfig():
// Device code
__global__ void MyKernel()
{
  ...
}

// Host code

// Runtime API
// cudaFuncCachePreferShared: shared memory is 48 KB
// cudaFuncCachePreferL1: shared memory is 16 KB
// cudaFuncCachePreferNone: no preference
cudaFuncSetCacheConfig(MyKernel, cudaFuncCachePreferShared);

// Driver API
// CU_FUNC_CACHE_PREFER_SHARED: shared memory is 48 KB
// CU_FUNC_CACHE_PREFER_L1: shared memory is 16 KB
// CU_FUNC_CACHE_PREFER_NONE: no preference
CUfunction myKernel;
cuFuncSetCacheConfig(myKernel, CU_FUNC_CACHE_PREFER_SHARED);

The default cache configuration is "prefer none," meaning "no preference." If a kernel is configured to have no preference, then it will default to the preference of the current thread/context, which is set using cudaThreadSetCacheConfig()/cuCtxSetCacheConfig() (see the reference manual for details). If the current thread/context also has no preference (which is again the default setting), then whichever cache configuration was most recently used for any kernel will be the one that is used, unless a different cache configuration is required to launch the kernel (e.g. due to shared memory requirements). The initial configuration is 48 KB of shared memory and 16 KB of L1 cache.
Multiprocessors are grouped into Graphics Processor Clusters (GPCs). A GPC includes four multiprocessors.

G. in the case of scattered memory accesses. if the size is 16 bytes. they can be configured at compile time to be cached in both L1 and L2 (-Xptxas -dlcm=ca) (this is the default setting) or in L2 only (-Xptxas -dlcm=cg). one for each quarter-warp. Each memory request is then broken down into cache line requests that are issued independently. If a non-atomic instruction executed by a warp writes to the same location in global memory for more than one of the threads of the warp. or at the throughput of device memory.4. a memory request by a warp is first split into separate 128-byte memory requests that are issued independently: Two memory requests. Compute Capabilities Each multiprocessor has a read-only texture cache to speed up reads from the texture memory space. including the same words.  Note that threads can access any words in any order.2 Global Memory Global memory accesses are cached. Using the –dlcm compilation flag.Appendix G. one for each half-warp.  Four memory requests. for example.4. which resides in device memory. only one thread performs a write and which thread does it is undefined. if the size is 8 bytes.2 163 . Caching in L2 only can therefore reduce over-fetch.2. It accesses the texture cache via a texture unit that implements the various addressing modes and data filtering mentioned in Section 3. A cache line request is serviced at the throughput of L1 or L2 cache in case of a cache hit. CUDA C Programming Guide Version 3. A cache line is 128 bytes and maps to a 128-byte aligned segment in device memory. otherwise. Memory accesses that are cached in both L1 and L2 are serviced with 128-byte memory transactions whereas memory accesses that are cached in L2 only are serviced with 32-byte memory transactions. If the size of the words accessed by each thread is more than 4 bytes.

[Figure G-1. Examples of Global Memory Accesses by a Warp, 4-Byte Word per Thread, and Associated Memory Transactions Based on Compute Capability. Three panels show aligned and sequential, aligned and non-sequential, and misaligned and sequential accesses by threads 0-31 to addresses 96-288, together with the memory transactions issued for compute capabilities 1.0/1.1, 1.2/1.3, and 2.0 (cached and uncached).]

G.4.3 Shared Memory
Shared memory has 32 banks that are organized such that successive 32-bit words are assigned to successive banks, i.e. interleaved. Each bank has a bandwidth of 32 bits per two clock cycles. Therefore, unlike for devices of lower compute capability, there may be bank conflicts between a thread belonging to the first half of a warp and a thread belonging to the second half of the same warp.
A bank conflict only occurs if two or more threads access any bytes within different 32-bit words belonging to the same bank. If two or more threads access any bytes within the same 32-bit word, there is no bank conflict between these threads: For read accesses, the word is broadcast to the requesting threads (and unlike for devices of compute capability 1.x, multiple words can be broadcast in a single transaction); for write accesses, each byte is written by only one of the threads (which thread performs the write is undefined).
This means, in particular, that unlike for devices of compute capability 1.x, there are no bank conflicts if an array of char is accessed as follows, for example:
__shared__ char shared[32];
char data = shared[BaseIndex + tid];

G.4.3.1 32-Bit Strided Access
A common access pattern is for each thread to access a 32-bit word from an array indexed by the thread ID tid and with some stride s:
__shared__ float shared[32];
float data = shared[BaseIndex + s * tid];
In this case, threads tid and tid+n access the same bank whenever s*n is a multiple of the number of banks (i.e. 32) or, equivalently, whenever n is a multiple of 32/d where d is the greatest common divisor of 32 and s. As a consequence, there will be no bank conflict only if the warp size (i.e. 32) is less than or equal to 32/d, that is only if d is equal to 1, i.e. s is odd.
Figure G-2 shows some examples of strided access.

G.4.3.2 Larger Than 32-Bit Access
64-bit and 128-bit accesses are specifically handled to minimize bank conflicts as described below. Other accesses larger than 32-bit are split into 32-bit, 64-bit, or 128-bit accesses. The following code, for example:
struct type { float x, y, z; };
__shared__ struct type shared[32];
struct type data = shared[BaseIndex + tid];
results in three separate 32-bit reads without bank conflicts since each member is accessed with a stride of three 32-bit words.
64-Bit Accesses: For 64-bit accesses, a bank conflict only occurs if two or more threads in either of the half-warps access different addresses belonging to the same bank.

Unlike for devices of compute capability 1.x, there are no bank conflicts for arrays of doubles accessed as follows, for example:
__shared__ double shared[32];
double data = shared[BaseIndex + tid];

128-Bit Accesses: The majority of 128-bit accesses will cause 2-way bank conflicts, even if no two threads in a quarter-warp access different addresses belonging to the same bank. Therefore, to determine the ways of bank conflicts, one must add 1 to the maximum number of threads in a quarter-warp that access different addresses belonging to the same bank.

G.4.4 Constant Memory
In addition to the constant memory space supported by devices of all compute capabilities (where __constant__ variables reside), devices of compute capability 2.x support the LDU (LoaD Uniform) instruction that the compiler uses to load any variable that is (see the sketch below):
 pointing to global memory,
 read-only in the kernel (the programmer can enforce this using the const keyword),
 not dependent on thread ID.
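A minimal sketch of a load eligible for LDU follows (illustrative, not part of the original guide; the kernel name is an assumption):

// coeffs is const and the index depends only on blockIdx, not on the
// thread ID, so the load is uniform across the block and the compiler
// can use the LDU instruction on compute capability 2.x.
__global__ void ScaleByBlockCoefficient(const float* coeffs, float* out)
{
    float c = coeffs[blockIdx.x];
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    out[i] = c * out[i];
}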

[Figure G-2. Examples of Strided Shared Memory Accesses for Devices of Compute Capability 2.x. Left: Linear addressing with a stride of one 32-bit word (no bank conflict). Middle: Linear addressing with a stride of two 32-bit words (2-way bank conflicts). Right: Linear addressing with a stride of three 32-bit words (no bank conflict).]

[Figure G-3. Examples of Irregular and Colliding Shared Memory Accesses for Devices of Compute Capability 2.x. Left: Conflict-free access via random permutation. Middle: Conflict-free access since threads 3, 4, 6, 7, and 9 access the same word within bank 5. Right: Conflict-free broadcast access (all threads access the same word).]


Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA Corporation products are not authorized for use as critical components in life support devices or systems without express written approval of NVIDIA Corporation.

Trademarks
NVIDIA, the NVIDIA logo, GeForce, Tesla, and Quadro are trademarks or registered trademarks of NVIDIA Corporation. Other company and product names may be trademarks of the respective companies with which they are associated. OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Copyright
© 2006-2010 NVIDIA Corporation. All rights reserved.
This work incorporates portions of an earlier work: Scalable Parallel Programming with CUDA, in ACM Queue, VOL 6, No. 2 (March/April 2008), © ACM, 2008. http://mags.acm.org/queue/20080304/?u1=texterity

NVIDIA Corporation
2701 San Tomas Expressway
Santa Clara, CA 95050
www.nvidia.com
