NVIDIA CUDA Programming Guide

Table of Contents

Chapter 1 Introduction
1.1 From Graphics Processing to General-Purpose Parallel Computing
Figure 1-2. The GPU Devotes More Transistors to Data Processing
1.2 CUDA™: a General-Purpose Parallel Computing Architecture
1.3 A Scalable Programming Model
1.4 Document’s Structure
Chapter 2 Programming Model
2.1 Kernels
2.2 Thread Hierarchy
Figure 2-1. Grid of Thread Blocks
2.3 Memory Hierarchy
2.4 Heterogeneous Programming
2.5 Compute Capability
Chapter 3 Programming Interface
3.1 Compilation with NVCC
3.1.1 Compilation Workflow
3.1.2 Binary Compatibility
3.1.3 PTX Compatibility
3.1.4 Application Compatibility
3.1.5 C/C++ Compatibility
3.2 CUDA C
3.2.1 Device Memory
3.2.2 Shared Memory
Figure 3-1. Matrix Multiplication without Shared Memory
Figure 3-2. Matrix Multiplication with Shared Memory
3.2.3 Multiple Devices
3.2.4 Texture Memory
3.2.4.1 Texture Reference Declaration
3.2.4.2 Runtime Texture Reference Attributes
3.2.4.3 Texture Binding
3.2.5 Page-Locked Host Memory
3.2.5.1 Portable Memory
3.2.5.2 Write-Combining Memory
3.2.5.3 Mapped Memory
3.2.6 Asynchronous Concurrent Execution
3.2.6.1 Concurrent Execution between Host and Device
3.2.6.2 Overlap of Data Transfer and Kernel Execution
3.2.6.3 Concurrent Kernel Execution
3.2.6.4 Concurrent Data Transfers
3.2.6.5 Stream
3.2.6.6 Event
3.2.6.7 Synchronous Calls
3.2.7 Graphics Interoperability
3.2.7.1 OpenGL Interoperability
3.2.7.2 Direct3D Interoperability
3.2.8 Error Handling
3.2.9 Debugging using the Device Emulation Mode
3.3 Driver API
3.3.1 Context
3.3.2 Module
3.3.3 Kernel Execution
3.3.4 Device Memory
3.3.5 Shared Memory
3.3.6 Multiple Devices
3.3.7 Texture Memory
3.3.8 Page-Locked Host Memory
3.3.9 Asynchronous Concurrent Execution
3.3.9.1 Stream
3.3.9.2 Event Management
3.3.9.3 Synchronous Calls
3.3.10 Graphics Interoperability
3.3.10.1 OpenGL Interoperability
3.3.10.2 Direct3D Interoperability
3.3.11 Error Handling
3.4 Interoperability between Runtime and Driver APIs
3.5 Versioning and Compatibility
3.6 Compute Modes
3.7 Mode Switches
Chapter 4 Hardware Implementation
4.1 SIMT Architecture
4.2 Hardware Multithreading
4.3 Multiple Devices
Chapter 5 Performance Guidelines
5.1 Overall Performance Optimization Strategies
5.2 Maximize Utilization
5.2.1 Application Level
5.2.2 Device Level
5.2.3 Multiprocessor Level
5.3 Maximize Memory Throughput
5.3.1 Data Transfer between Host and Device
5.3.2 Device Memory Accesses
5.3.2.1 Global Memory
5.3.2.2 Local Memory
5.3.2.3 Shared Memory
5.3.2.4 Constant Memory
5.4 Maximize Instruction Throughput
5.4.1 Arithmetic Instructions
5.4.2 Control Flow Instructions
5.4.3 Synchronization Instruction
Appendix A CUDA-Enabled GPUs
Appendix B C Language Extensions
B.1 Function Type Qualifiers
B.1.1 __device__
B.1.2 __global__
B.1.3 __host__
B.2 Variable Type Qualifiers
B.2.5 Restrictions
B.3 Built-in Vector Types
B.3.2 dim3
B.4 Built-in Variables
B.4.1 gridDim
B.4.2 blockIdx
B.4.3 blockDim
B.4.4 threadIdx
B.4.5 warpSize
B.4.6 Restrictions
B.5 Memory Fence Functions
B.6 Synchronization Functions
B.7 Mathematical Functions
B.8 Texture Functions
B.8.1 tex1Dfetch()
B.8.2 tex1D()
B.8.3 tex2D()
B.8.4 tex3D()
B.9 Time Function
B.10 Atomic Functions
B.10.1 Arithmetic Functions
B.10.1.1 atomicAdd()
B.10.1.2 atomicSub()
B.10.1.3 atomicExch()
B.10.1.4 atomicMin()
B.10.1.5 atomicMax()
B.10.1.6 atomicInc()
B.10.1.7 atomicDec()
B.10.1.8 atomicCAS()
B.10.2 Bitwise Functions
B.10.2.1 atomicAnd()
B.10.2.2 atomicOr()
B.10.2.3 atomicXor()
B.11 Warp Vote Functions
B.12 Profiler Counter Function
B.13 Execution Configuration
B.14 Launch Bounds
Appendix C Mathematical Functions
C.1 Standard Functions
C.1.1 Single-Precision Floating-Point Functions
C.1.2 Double-Precision Floating-Point Functions
C.1.3 Integer Functions
C.2 Intrinsic Functions
C.2.1 Single-Precision Floating-Point Functions
C.2.2 Double-Precision Floating-Point Functions
C.2.3 Integer Functions
Appendix D C++ Language Constructs
D.1 Polymorphism
D.2 Default Parameters
D.3 Operator Overloading
D.4 Namespaces
D.5 Function Templates
D.6 Classes
D.6.1 Example 1 Pixel Data Type
D.6.2 Example 2 Functor Class
Appendix E NVCC Specifics
E.1 __noinline__
E.2 #pragma unroll
E.3 __restrict__
Appendix F Texture Fetching
F.1 Nearest-Point Sampling
F.2 Linear Filtering
F.3 Table Lookup
Appendix G Compute Capabilities
G.1 Features and Technical Specifications
G.2 Floating-Point Standard
G.3 Compute Capability 1.x
G.3.1 Architecture
G.3.2 Global Memory
G.3.2.1 Devices of Compute Capability 1.0 and 1.1
G.3.2.2 Devices of Compute Capability 1.2 and 1.3
G.3.3 Shared Memory
G.3.3.1 32-Bit Strided Access
G.3.3.2 32-Bit Broadcast Access
G.3.3.3 8-Bit and 16-Bit Access
G.3.3.4 Larger Than 32-Bit Access
G.4 Compute Capability 2.0
G.4.1 Architecture
G.4.2 Global Memory
G.4.3 Shared Memory
G.4.3.1 32-Bit Strided Access
G.4.3.2 Larger Than 32-Bit Access
G.4.4 Constant Memory
NVIDIA CUDA Programming Guide, published on Scribd by Gustavo on Apr 09, 2012.