
schedule (Callable): callable that takes step (int) as a single parameter and
    returns ``ProfilerAction`` value that specifies the profiler action to
    perform at each step.
on_trace_ready (Callable): callable that is called at each step when
    ``schedule`` returns ``ProfilerAction.RECORD_AND_SAVE`` during the
    profiling.
record_shapes (bool): save information about operator's input shapes.
profile_memory (bool): track tensor memory allocation/deallocation.
with_stack (bool): record source information (file and line number) for the
    ops.
with_flops (bool): use formula to estimate the FLOPs (floating point
    operations) of specific operators (matrix multiplication and 2D
    convolution).
with_modules (bool): record the module hierarchy (including function names)
    corresponding to the callstack of the op. E.g. if module A's forward
    calls module B's forward, which contains an ``aten::add`` op, then
    ``aten::add``'s module hierarchy is A.B.
    Note that this support exists, at the moment, only for TorchScript
    models and not eager mode models.
experimental_config (_ExperimentalConfig): A set of experimental options
    used for Kineto library features. Note, backward compatibility is not
    guaranteed.

use_cuda (bool):
    .. deprecated:: 1.8.1
        use ``activities`` instead.
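As a quick illustration of ``record_shapes``, here is a minimal sketch (it assumes ``torch`` is installed; the exact operator names in the output depend on the PyTorch version):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# record_shapes=True makes the profiler capture each op's input shapes,
# which lets key_averages() split results per shape
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as p:
    torch.mm(torch.randn(8, 8), torch.randn(8, 8))

# group_by_input_shape=True adds an "Input Shapes" column to the table
print(p.key_averages(group_by_input_shape=True).table(row_limit=5))
```

Without ``record_shapes=True``, ``group_by_input_shape=True`` has nothing to group on and the shapes column stays empty.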

.. note::
    Use :func:`~torch.profiler.schedule` to generate the callable schedule.
    Non-default schedules are useful when profiling long training jobs and
    allow the user to obtain multiple traces at different iterations of the
    training process.
    The default schedule simply records all the events continuously for the
    duration of the context manager.
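The wait/warmup/active/repeat cycle that :func:`~torch.profiler.schedule` produces can be sketched in plain Python. This is a simplified reimplementation for illustration only, not the actual library code; the real function also accepts a ``skip_first`` argument and returns members of ``torch.profiler.ProfilerAction``:

```python
from enum import Enum


class ProfilerAction(Enum):
    # mirrors the members of torch.profiler.ProfilerAction
    NONE = 0
    WARMUP = 1
    RECORD = 2
    RECORD_AND_SAVE = 3


def schedule(wait, warmup, active, repeat=0):
    """Return a callable mapping a step number to a ProfilerAction,
    cycling through wait -> warmup -> active phases."""
    def fn(step):
        num_steps = wait + warmup + active
        # with repeat > 0, steps past the last cycle produce no action
        if repeat > 0 and step // num_steps >= repeat:
            return ProfilerAction.NONE
        mod = step % num_steps
        if mod < wait:
            return ProfilerAction.NONE
        if mod < wait + warmup:
            return ProfilerAction.WARMUP
        # the last active step saves the trace and fires on_trace_ready
        return (ProfilerAction.RECORD_AND_SAVE
                if mod == num_steps - 1 else ProfilerAction.RECORD)
    return fn
```

With ``wait=1, warmup=1, active=2, repeat=1`` this yields ``NONE`` at step 0, ``WARMUP`` at step 1, ``RECORD`` at step 2, ``RECORD_AND_SAVE`` at step 3, and ``NONE`` afterwards, matching the annotated example below.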

.. note::
    Use :func:`~torch.profiler.tensorboard_trace_handler` to generate result
    files for TensorBoard:

    ``on_trace_ready=torch.profiler.tensorboard_trace_handler(dir_name)``

    After profiling, result files can be found in the specified directory. Use
    the command:

    ``tensorboard --logdir dir_name``

    to see the results in TensorBoard.
    For more information, see
    `PyTorch Profiler TensorBoard Plugin <https://github.com/pytorch/kineto/tree/master/tb_plugin>`__

.. note::
    Enabling shape and stack tracing results in additional overhead.
    When record_shapes=True is specified, the profiler will temporarily hold
    references to the tensors; that may further prevent certain optimizations
    that depend on the reference count and introduce extra tensor copies.

Examples:

.. code-block:: python

    with torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA,
        ]
    ) as p:
        code_to_profile()
    print(p.key_averages().table(
        sort_by="self_cuda_time_total", row_limit=-1))

Using the profiler's ``schedule``, ``on_trace_ready`` and ``step`` functions:

.. code-block:: python

    # Non-default profiler schedule allows user to turn profiler on and off
    # on different iterations of the training loop;
    # trace_handler is called every time a new trace becomes available
    def trace_handler(prof):
        print(prof.key_averages().table(
            sort_by="self_cuda_time_total", row_limit=-1))
        # prof.export_chrome_trace("/tmp/test_trace_" + str(prof.step_num) + ".json")

    with torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA,
        ],

        # In this example with wait=1, warmup=1, active=2, repeat=1,
        # profiler will skip the first step/iteration,
        # start warming up on the second, record
        # the third and the fourth iterations,
        # after which the trace will become available
        # and on_trace_ready (when set) is called;
        # the cycle repeats starting with the next step

        schedule=torch.profiler.schedule(
            wait=1,
            warmup=1,
            active=2,
            repeat=1),
        on_trace_ready=trace_handler
        # on_trace_ready=torch.profiler.tensorboard_trace_handler('./log')
        # used when outputting for tensorboard
    ) as p:
        for iter in range(N):
            code_iteration_to_profile(iter)
            # send a signal to the profiler that the next iteration has started
            p.step()
