
Learning Vulkan – Parminder Singh

1. What does the “driver layer in Vulkan” actually mean?

- does it really have to do with “device-dependent code”?

2. What is a “graphics processing unit”?

- is this the whole chip or just a single streaming multiprocessor (SM)?
- graphics processing unit => GPU (duh..)

3. What does it mean that “the driver is not responsible for managing resources” and “resource control is explicit”?
- so I can manually allocate and deallocate memory on the Vulkan device?
- what about “flushing” vs “batching”? Does asking for a command to execute actually do so, or does it wait for a specific moment / event to occur? Can I manually control the triggering of processing some batched commands?
- how can a host program listen and trigger asynchronously after some Vulkan device has finished its task? => reactive programming
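The three sub-questions above map directly onto real API calls: the host allocates and frees device memory itself, recorded commands run only when explicitly submitted to a queue, and a fence is one way for the host to observe completion. A minimal sketch, assuming the `device`, `queue`, `cmdBuf`, and `fence` handles (hypothetical names) were already created elsewhere:

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

/* Assumed to exist already (hypothetical names): */
extern VkDevice device;
extern VkQueue queue;
extern VkCommandBuffer cmdBuf;
extern VkFence fence;

static void sketch_explicit_control(void)
{
    /* Explicit allocation: nothing is allocated until the host asks. */
    VkMemoryAllocateInfo allocInfo = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .allocationSize  = 65536, /* bytes, chosen for illustration */
        .memoryTypeIndex = 0,     /* must come from vkGetPhysicalDeviceMemoryProperties */
    };
    VkDeviceMemory memory;
    vkAllocateMemory(device, &allocInfo, NULL, &memory);

    /* Recorded commands do not run when recorded; they run only when the
       host explicitly submits them to a queue ("manual triggering"). */
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmdBuf,
    };
    vkQueueSubmit(queue, 1, &submit, fence); /* fence signals on completion */

    /* Host-side "listen for the device finishing": block (or poll) on the fence. */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);

    /* Explicit deallocation, again driven by the host. */
    vkFreeMemory(device, memory, NULL);
}
```

For a non-blocking, reactive-style check, `vkGetFenceStatus` can be polled instead of waiting.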

4. What is the source code that gets translated to SPIR-V (Standard, Portable Intermediate Representation – the “V” denotes the Vulkan flavour, not a version number)?
- is it similar to GLSL? It would seem that multiple languages can be translated to SPIR-V, including GLSL and HLSL (*)
- what computational model / execution model does SPIR-V have? What abstract machine does it imagine executing it?
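As a concrete instance of the GLSL-to-SPIR-V path: a shader is compiled offline to a `.spv` binary, and Vulkan only ever sees the SPIR-V words. A sketch, assuming `device` exists and the app has loaded the `.spv` blob into `spirvWords`/`spirvSizeInBytes` (hypothetical names):

```c
/* A minimal GLSL compute shader, compiled offline, e.g. with:
       glslangValidator -V shader.comp -o shader.spv
   where shader.comp contains:
       #version 450
       layout(local_size_x = 64) in;
       void main() { }
*/
#include <stdint.h>
#include <vulkan/vulkan.h>

extern VkDevice device;              /* assumed to exist */
extern const uint32_t *spirvWords;   /* loaded .spv contents (hypothetical) */
extern size_t spirvSizeInBytes;

static void sketch_shader_module(void)
{
    /* The SPIR-V words are handed to Vulkan as-is; the front-end language
       (GLSL, HLSL, ...) is invisible at this point. */
    VkShaderModuleCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = spirvSizeInBytes,
        .pCode    = spirvWords,
    };
    VkShaderModule module;
    vkCreateShaderModule(device, &info, NULL, &module);
}
```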

5. What is the difference between the “Vulkan physical device” and the “Vulkan (logical) device”?
- anything “logical” probably has to do with a “model” of a physicality (*)
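The physical/logical split shows up directly in the API: a physical device is an enumerated piece of hardware, while a logical device is a handle the application creates on top of it, selecting which queues and features it wants. A sketch, assuming a valid `instance`:

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

extern VkInstance instance; /* assumed to exist */

static void sketch_physical_vs_logical(void)
{
    /* Physical devices: the hardware the driver found. */
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice phys[8];
    if (count > 8) count = 8; /* clamp for this fixed-size sketch */
    vkEnumeratePhysicalDevices(instance, &count, phys);

    /* Logical device: the app's "model" of one physical device,
       with the queues it intends to use. */
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = 0, /* chosen via vkGetPhysicalDeviceQueueFamilyProperties */
        .queueCount = 1,
        .pQueuePriorities = &priority,
    };
    VkDeviceCreateInfo devInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos = &queueInfo,
    };
    VkDevice device;
    vkCreateDevice(phys[0], &devInfo, NULL, &device);
}
```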

6. What is a “command buffer”?

- the command buffer seems to be submitted to a queue (of a “compatible family”, based on the set of instructions)
- it also seems that the queues handle the dispatches and distribution(+)allocation of the workload to actual cores (/ “work units”); there are even notions of “work groups” which probably have to do with how to cluster (=> leads to “locality” in the basis of the GPU’s architecture? - well, as much as the GPU can emulate the locality in its architecture -)
- the “command buffer” likely provides mechanisms to adjust such things as “synchronization” and “work-group / locality layouts”, which are besides just the notion of “the binary instructions code”?
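The command-buffer lifecycle the bullets above hint at can be sketched concretely: allocate from a pool, record (nothing executes yet), then submit to a queue of a compatible family. `device`, `queue`, and `cmdPool` are assumed to exist; pipeline and descriptor binding are omitted for brevity:

```c
#include <vulkan/vulkan.h>

extern VkDevice device;        /* assumed to exist */
extern VkQueue queue;          /* from a compute-capable family */
extern VkCommandPool cmdPool;  /* created for that same family */

static void sketch_command_buffer(void)
{
    VkCommandBufferAllocateInfo allocInfo = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .commandPool = cmdPool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = 1,
    };
    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(device, &allocInfo, &cmd);

    VkCommandBufferBeginInfo begin = {
        .sType = VK_COMMAND_BUFFER_BEGIN_INFO_IS_UNDEFINED ? 0
               : VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    vkBeginCommandBuffer(cmd, &begin);
    /* Recording: the work-group count below is exactly the "work group /
       locality" knob mentioned above (pipeline binding omitted). */
    vkCmdDispatch(cmd, 64, 1, 1); /* launch 64 work groups */
    vkEndCommandBuffer(cmd);

    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmd,
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE); /* execution starts here */
}
```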

7. Do “commands” in the same “command buffer” enforce an order constraint? Do they really execute “in-order”?
- this would imply that instructions that can execute asynchronously would have to be scheduled/submitted in different command buffers (?)
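One relevant data point for the ordering question: commands in a buffer *begin* in recorded order, but their execution may overlap, so where ordering matters the programmer inserts explicit synchronization. A sketch of a memory barrier between two compute dispatches, assuming a command buffer `cmd` mid-recording:

```c
#include <vulkan/vulkan.h>

extern VkCommandBuffer cmd; /* assumed to be in the recording state */

static void sketch_ordering(void)
{
    vkCmdDispatch(cmd, 64, 1, 1); /* pass 1 */

    /* Without this barrier, pass 2 could overlap pass 1. */
    VkMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, /* wait for pass 1's writes... */
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, /* ...before pass 2's reads */
        0, 1, &barrier, 0, NULL, 0, NULL);

    vkCmdDispatch(cmd, 64, 1, 1); /* pass 2: sees pass 1's results */
}
```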

8. So the multi-threading aspect of Vulkan is in its ability to construct and submit “command buffers” concurrently?

9. What does / might the driver code (which I suppose is still host-side?) do in processing a command buffer?
- does it compute “reductions”, or would that be computationally hard?

10. Because command buffers are assigned to specific queues, and queues belong to some family which constrains what functionality they can schedule commands for, this ought to imply that a command buffer can only contain compatible commands (*?)
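The family/capability matching described above is done explicitly by the application: it queries each family's flags and picks one that supports the kind of commands it plans to record. A sketch, assuming a valid `phys` handle:

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

extern VkPhysicalDevice phys; /* assumed to exist */

static void sketch_queue_families(void)
{
    uint32_t n = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &n, NULL);
    VkQueueFamilyProperties props[16];
    if (n > 16) n = 16; /* clamp for this fixed-size sketch */
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &n, props);

    for (uint32_t i = 0; i < n; ++i) {
        if (props[i].queueFlags & VK_QUEUE_COMPUTE_BIT) {
            /* queues from family i can legally execute compute commands;
               a command buffer meant for them must stick to what the
               family supports */
        }
    }
}
```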

11. What is Vulkan’s “evaluate-return model”? Are there ways to submit a command buffer and expect a result (as if there were a return value), or are all commands similar to void procedures?
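On the "void procedures" question: a submission itself returns nothing from the GPU; the usual pattern is to have shaders write into host-visible memory, wait on a fence, then map the memory to read the results back. A sketch, assuming `device`, `queue`, `cmdBuf`, `fence`, and a host-visible `memory` allocation (hypothetical names) were set up earlier:

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

extern VkDevice device;
extern VkQueue queue;
extern VkCommandBuffer cmdBuf;
extern VkFence fence;
extern VkDeviceMemory memory; /* host-visible, written by the shaders */

static void sketch_readback(void)
{
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmdBuf,
    };
    vkQueueSubmit(queue, 1, &submit, fence);                 /* "void" call  */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX); /* completion   */

    void *ptr = NULL;
    vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &ptr);  /* the "return" */
    /* ... copy out whatever the shaders wrote ... */
    vkUnmapMemory(device, memory);
}
```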

12. What is a “descriptor set”, or the “descriptor set layout”? Or the “descriptor pool” and “command pool”?
- what about the “pipeline cache” and the “pipeline layout”?
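As a first answer to the descriptor vocabulary: a descriptor set layout describes *what kinds* of resources a shader expects, and a descriptor pool is the arena actual sets are allocated from. A sketch with a single storage-buffer binding, assuming `device` exists:

```c
#include <vulkan/vulkan.h>

extern VkDevice device; /* assumed to exist */

static void sketch_descriptors(void)
{
    /* Layout: "binding 0 is one storage buffer, visible to compute". */
    VkDescriptorSetLayoutBinding binding = {
        .binding = 0,
        .descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER,
        .descriptorCount = 1,
        .stageFlags = VK_SHADER_STAGE_COMPUTE_BIT,
    };
    VkDescriptorSetLayoutCreateInfo layoutInfo = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .bindingCount = 1,
        .pBindings = &binding,
    };
    VkDescriptorSetLayout layout;
    vkCreateDescriptorSetLayout(device, &layoutInfo, NULL, &layout);

    /* Pool: where concrete descriptor sets of that layout come from. */
    VkDescriptorPoolSize poolSize = { VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 1 };
    VkDescriptorPoolCreateInfo poolInfo = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
        .maxSets = 1,
        .poolSizeCount = 1,
        .pPoolSizes = &poolSize,
    };
    VkDescriptorPool pool;
    vkCreateDescriptorPool(device, &poolInfo, NULL, &pool);
}
```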

