ea69cb481e37 netdata dashboard (http://172.17.0.2:19999/)

System Overview
Overview of the key system metrics.

cpu
Total CPU utilization (all cores). 100% here means there is no CPU idle time at all. You can
get per-core usage in the CPUs section and per-application usage in the Applications
Monitoring section.
Keep an eye on iowait. If it is constantly high, your disks are a bottleneck and they slow
your system down.
Another metric worth monitoring is softirq. A constantly high percentage of softirq may
indicate network driver issues.
[Chart: Total CPU utilization (system.cpu); unit: percentage; source: proc:/proc/stat; dimensions: softirq, user, system, nice, iowait]
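The percentages in this chart are derived from the cumulative per-state tick counters in
/proc/stat. A minimal sketch of the same calculation, assuming Linux and Python 3 (this is
illustrative, not netdata's actual collector):

import time

def cpu_times():
    # First line of /proc/stat holds the aggregate "cpu" counters:
    # user nice system idle iowait irq softirq steal [guest ...]
    with open("/proc/stat") as f:
        return [int(v) for v in f.readline().split()[1:9]]

before = cpu_times()
time.sleep(1)
after = cpu_times()
deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas) or 1
for name, d in zip(["user", "nice", "system", "idle", "iowait",
                    "irq", "softirq", "steal"], deltas):
    print(f"{name:8s} {100.0 * d / total:6.2f} %")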

Pressure Stall Information (https://www.kernel.org/doc/html/latest/accounting/psi.html)
identifies and quantifies the disruptions caused by resource contention. The "some" line
indicates the share of time in which at least some tasks are stalled on CPU. The ratios (in
%) are tracked as recent trends over 10-, 60- and 300-second windows.
[Chart: CPU Pressure (system.cpu_pressure); unit: percentage; source: proc:/proc/pressure; dimensions: some 10, some 60, some 300]
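The avg10/avg60/avg300 ratios come straight from /proc/pressure/<resource>. A minimal
parsing sketch, assuming a kernel with PSI enabled (Linux 4.20+):

def read_pressure(resource):
    # resource is "cpu", "io" or "memory"; each line looks like
    # "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
    # (plus a "full" line where the kernel supports it)
    result = {}
    with open(f"/proc/pressure/{resource}") as f:
        for line in f:
            kind, *pairs = line.split()
            result[kind] = dict(p.split("=") for p in pairs)
    return result

print(read_pressure("cpu"))  # e.g. {'some': {'avg10': '0.00', ...}}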


load
Current system load, i.e. the number of processes using the CPU or waiting for system
resources (usually CPU and disk). The three metrics are 1-, 5- and 15-minute averages. The
system calculates them once every 5 seconds. For more information see this Wikipedia
article (https://en.wikipedia.org/wiki/Load_(computing)).
[Chart: System Load Average (system.load); unit: load; source: proc:/proc/loadavg; dimensions: load1, load5, load15]
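The three averages are read from /proc/loadavg (os.getloadavg() wraps the same source on
Linux). A minimal sketch:

import os

load1, load5, load15 = os.getloadavg()
print(f"load1={load1:.2f} load5={load5:.2f} load15={load15:.2f}")

# The raw file also exposes runnable/total entities and the last PID:
with open("/proc/loadavg") as f:
    print(f.read().strip())  # e.g. "0.34 0.45 0.39 1/681 12345"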

disk
Total disk I/O for all physical disks. You can get detailed information about each disk in the
Disks section, and per-application disk usage in the Applications Monitoring section.
Physical disks are all the disks listed in /sys/block that do not exist in
/sys/devices/virtual/block.
[Chart: Disk I/O (system.io); unit: MiB/s; source: proc:/proc/diskstats; dimensions: in, out]


Memory paged from/to disk. This is usually the total disk I/O of the system.
[Chart: Memory Paged from/to disk (system.pgpgio); unit: MiB/s; source: proc:/proc/vmstat; dimensions: in, out]

Pressure Stall Information (https://www.kernel.org/doc/html/latest/accounting/psi.html)
identifies and quantifies the disruptions caused by resource contention. The "some" line
indicates the share of time in which at least some tasks are stalled on I/O. The "full" line
indicates the share of time in which all non-idle tasks are stalled on I/O simultaneously. In
this state actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. The ratios (in %) are tracked as recent trends
over 10-, 60- and 300-second windows.
[Chart: I/O Pressure (system.io_some_pressure); unit: percentage; source: proc:/proc/pressure; dimensions: some 10, some 60, some 300]

[Chart: I/O Full Pressure (system.io_full_pressure); unit: percentage; source: proc:/proc/pressure; dimensions: full 10, full 60, full 300]

ram


System Random Access Memory (i.e. physical memory) usage.


[Chart: System RAM (system.ram); unit: GiB; source: proc:/proc/meminfo; dimensions: free, used, cached, buffers]

Pressure Stall Information (https://www.kernel.org/doc/html/latest/accounting/psi.html)
identifies and quantifies the disruptions caused by resource contention. The "some" line
indicates the share of time in which at least some tasks are stalled on memory. The "full"
line indicates the share of time in which all non-idle tasks are stalled on memory
simultaneously. In this state actual CPU cycles are going to waste, and a workload that
spends extended time in this state is considered to be thrashing. The ratios (in %) are
tracked as recent trends over 10-, 60- and 300-second windows.
[Chart: Memory Pressure (system.memory_some_pressure); unit: percentage; source: proc:/proc/pressure; dimensions: some 10, some 60]

[Chart: Memory Full Pressure (system.memory_full_pressure); unit: percentage; source: proc:/proc/pressure; dimensions: full 10, full 60, full 300]

swap

System swap memory usage. Swap space is used when physical memory (RAM) is full. When the
system needs more memory resources and the RAM is full, inactive pages in memory are moved
to the swap space (usually a disk, a disk partition, or a file).
[Chart: System Swap (system.swap); unit: MiB; source: proc:/proc/meminfo; dimensions: free]

network
Total IP traffic in the system.
[Chart: IP Bandwidth (system.ip); unit: megabits/s; source: proc:/proc/net/netstat; dimensions: received, sent]

Total IPv6 Traffic.


[Chart: IPv6 Bandwidth (system.ipv6); unit: bits/s; source: proc:/proc/net/snmp6; dimensions: received]


processes
System processes. Running are processes currently using a CPU. Blocked are processes that
are ready to run but cannot, e.g. because they are waiting for disk activity.
[Chart: System Processes (system.processes); unit: processes; source: proc:/proc/stat; dimensions: running, blocked]

Number of new processes created.


[Chart: Started Processes (system.forks); unit: processes/s; source: proc:/proc/stat; dimensions: started]

All system processes.


[Chart: System Active Processes (system.active_processes); unit: processes; source: proc:/proc/loadavg; dimensions: active]


A context switch (https://en.wikipedia.org/wiki/Context_switch) is the switching of the CPU
from one process, task or thread to another. If there are many processes or threads willing
to execute and very few CPU cores available to handle them, the system performs more
context switching to balance the CPU resources among them. Context switching is
computationally intensive: the more context switches, the slower the system gets.
[Chart: CPU Context Switches (system.ctxt); unit: context switches/s; source: proc:/proc/stat; dimensions: switches]
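The switches/s rate is the delta of the cumulative ctxt counter in /proc/stat between two
samples. A minimal sketch:

import time

def ctxt():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])  # cumulative context switches

before = ctxt()
time.sleep(1)
print(f"{ctxt() - before} context switches/s")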

idlejitter
Idle jitter is calculated by netdata. A thread is spawned that requests to sleep for a few
microseconds. When the system wakes it up, it measures how many microseconds have actually
passed. The difference between the requested and the actual duration of the sleep is the
idle jitter. This number is useful in real-time environments, where CPU jitter can affect
the quality of the service (like VoIP media gateways).
[Chart: CPU Idle Jitter (system.idlejitter); unit: microseconds lost/s; source: idlejitter; dimensions: min, max, average]
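A minimal sketch of the measurement described above: sleep for a fixed duration and record
the overshoot. The 20 ms request interval is an illustrative value, not necessarily what
netdata uses.

import time

REQUESTED_US = 20_000  # requested sleep, in microseconds (illustrative)

def one_sample():
    start = time.perf_counter()
    time.sleep(REQUESTED_US / 1_000_000)
    elapsed_us = (time.perf_counter() - start) * 1_000_000
    return elapsed_us - REQUESTED_US  # the overshoot is the jitter

samples = [one_sample() for _ in range(50)]
print(f"min={min(samples):.0f}us max={max(samples):.0f}us "
      f"avg={sum(samples) / len(samples):.0f}us")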

interrupts


Total number of CPU interrupts. Check system.interrupts, which gives more detail about
each interrupt, and also the CPUs section, where interrupts are analyzed per CPU core.
[Chart: CPU Interrupts (system.intr); unit: interrupts/s; source: proc:/proc/stat; dimensions: interrupts]

CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU core.
[Chart: System interrupts (system.interrupts); unit: interrupts/s; source: proc:/proc/interrupts; dimensions: i8042_1, i8042_12, ata_piix_15, vmwgfx_18, enp0s3_19, vboxguest_20, snd_intel8x0_21, LOC, MCP]

softirqs
CPU softirqs in detail. At the CPUs section, softirqs are analyzed per CPU core.
[Chart: System softirqs (system.softirqs); unit: softirqs/s; source: proc:/proc/softirqs; dimensions: TIMER, NET_TX, NET_RX, BLOCK, TASKLET, RCU]

softnet


Statistics for CPU SoftIRQs related to network receive work. A per-CPU-core breakdown can
be found at CPU / softnet statistics. processed is the number of packets processed;
dropped is the number of packets dropped because the network device backlog was full (to
fix this on Linux, use sysctl to increase net.core.netdev_max_backlog); squeezed is the
number of times the system ran out of network device budget while more packets were
pending (to fix this on Linux, use sysctl to increase net.core.netdev_budget and/or
net.core.netdev_budget_usecs). More information about identifying and troubleshooting
network driver related issues can be found in the Red Hat Enterprise Linux Network
Performance Tuning Guide (https://access.redhat.com/sites/default/files/attachments
/20150325_network_performance_tuning.pdf).

[Chart: System softnet_stat (system.softnet_stat); unit: events/s; source: proc:/proc/net/softnet_stat; dimensions: processed]
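The counters behind this chart live in /proc/net/softnet_stat, one row of hexadecimal
values per CPU; the first three columns are the cumulative processed, dropped and squeezed
counts. A minimal parsing sketch:

def softnet():
    rows = []
    with open("/proc/net/softnet_stat") as f:
        for cpu, line in enumerate(f):
            cols = [int(v, 16) for v in line.split()]  # hex counters
            rows.append({"cpu": cpu, "processed": cols[0],
                         "dropped": cols[1], "squeezed": cols[2]})
    return rows

for row in softnet():
    print(row)
# A growing "dropped" suggests raising net.core.netdev_max_backlog;
# a growing "squeezed" suggests raising net.core.netdev_budget(_usecs).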

entropy
Entropy (https://en.wikipedia.org/wiki/Entropy_(computing)) is a pool of random numbers
(/dev/random (https://en.wikipedia.org/wiki//dev/random)) that is mainly used in
cryptography. If the entropy pool runs empty, processes requiring random numbers may slow
down considerably (depending on the interface each program uses) while waiting for the
pool to be replenished. Ideally a system with high entropy demands should have a hardware
device for that purpose (a TPM is one such device). There are also several software-only
options you may install, like haveged, although these are generally useful only on servers.
[Chart: Available Entropy (system.entropy); unit: entropy; source: proc:/proc/sys/kernel/random/entropy_avail; dimensions: entropy]

uptime


[Chart: System Uptime (system.uptime); unit: time; source: proc:/proc/uptime; dimensions: uptime]

clock synchronization
State map: 0 - not synchronized, 1 - synchronized
[Chart: System Clock Synchronization State (system.clock_sync_state); unit: state; source: timex; dimensions: state]

[Chart: Computed Time Offset Between Local System and Reference Clock (system.clock_sync_offset); unit: milliseconds; source: timex; dimensions: offset]

ipc semaphores


[Chart: IPC Semaphores (system.ipc_semaphores); unit: semaphores; source: proc:ipc; dimensions: semaphores]

[Chart: IPC Semaphore Arrays (system.ipc_semaphore_arrays); unit: arrays; source: proc:ipc; dimensions: arrays]

ipc shared memory


[Chart: IPC Shared Memory Number of Segments (system.shared_memory_segments); unit: segments; source: proc:ipc; dimensions: segments]


[Chart: IPC Shared Memory Used Bytes (system.shared_memory_bytes); unit: bytes; source: proc:ipc; dimensions: bytes]

CPUs
Detailed information for each CPU of the system. A summary of the system for all CPUs can be
found at the System Overview section.

utilization
[Chart: Core utilization (cpu.cpu0); unit: percentage; source: proc:/proc/stat; dimensions: softirq, user, system, nice, iowait]

interrupts


[Chart: CPU0 Interrupts (cpu.cpu0_interrupts); unit: interrupts/s; source: proc:/proc/interrupts; dimensions: i8042_1, i8042_12, ata_piix_15, vmwgfx_18, enp0s3_19, vboxguest_20, snd_intel8x0_21, LOC, MCP]

softirqs
[Chart: CPU0 softirqs (cpu.cpu0_softirqs); unit: softirqs/s; source: proc:/proc/softirqs; dimensions: TIMER, NET_TX, NET_RX, BLOCK, TASKLET, RCU]

softnet
Per-CPU-core statistics for SoftIRQs related to network receive work. The total for all
CPU cores can be found at System / softnet statistics. processed is the number of packets
processed; dropped is the number of packets dropped because the network device backlog was
full (to fix this on Linux, use sysctl to increase net.core.netdev_max_backlog); squeezed
is the number of times the system ran out of network device budget while more packets were
pending (to fix this on Linux, use sysctl to increase net.core.netdev_budget and/or
net.core.netdev_budget_usecs). More information about identifying and troubleshooting
network driver related issues can be found in the Red Hat Enterprise Linux Network
Performance Tuning Guide (https://access.redhat.com/sites/default/files/attachments
/20150325_network_performance_tuning.pdf).


[Chart: CPU0 softnet_stat (cpu.cpu0_softnet_stat); unit: events/s; source: proc:/proc/net/softnet_stat; dimensions: processed]

cpuidle
[Chart: C-state residency time (cpu.cpu0_cpuidle); unit: percentage; source: proc:/proc/stat; dimensions: C0 (active)]

Memory
Detailed information about the memory management of the system.

system


Available Memory is estimated by the kernel as the amount of RAM that can be used by
userspace processes without causing swapping.
[Chart: Available RAM for applications (mem.available); unit: GiB; source: proc:/proc/meminfo; dimensions: avail]
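MemAvailable, like the other values charted in this Memory section, comes from
/proc/meminfo, which reports sizes in kB. A minimal parsing sketch:

def meminfo():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])  # size in kB
    return values

mi = meminfo()
for key in ("MemTotal", "MemAvailable", "Committed_AS", "Dirty", "Writeback"):
    print(f"{key:14s} {mi[key] / 1024:10.1f} MiB")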

[Chart: Out of Memory Kills (mem.oom_kill); unit: kills/s; source: proc:/proc/vmstat; dimensions: kills]

Committed Memory is the sum of all memory which has been allocated by processes.
[Chart: Committed (Allocated) Memory (mem.committed); unit: GiB; source: proc:/proc/meminfo; dimensions: Committed_AS]


A page fault (https://en.wikipedia.org/wiki/Page_fault) is a type of interrupt, called a
trap, raised by computer hardware when a running program accesses a memory page that is
mapped into the virtual address space but not actually loaded into main memory. If the
page is loaded in memory at the time the fault is generated, but is not marked in the
memory management unit as being loaded, it is called a minor or soft page fault. A major
page fault is generated when the system needs to load the memory page from disk or swap
memory.
[Chart: Memory Page Faults (mem.pgfaults); unit: faults/s; source: proc:/proc/vmstat; dimensions: minor, major]
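The minor/major rates are derived from the cumulative pgfault and pgmajfault counters in
/proc/vmstat (minor = pgfault - pgmajfault). A minimal sketch:

import time

def faults():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            counters[key] = int(value)
    return counters["pgfault"], counters["pgmajfault"]

total1, major1 = faults()
time.sleep(1)
total2, major2 = faults()
major = major2 - major1
print(f"minor={total2 - total1 - major}/s major={major}/s")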

kernel
Dirty is the amount of memory waiting to be written to disk. Writeback is how much
memory is actively being written to disk.
[Chart: Writeback Memory (mem.writeback); unit: MiB; source: proc:/proc/meminfo; dimensions: Dirty, Writeback]


The total amount of memory being used by the kernel. Slab is the amount of memory used by
the kernel to cache data structures for its own use. KernelStack is the amount of memory
used for the kernel stacks of tasks. PageTables is the amount of memory dedicated to the
lowest level of page tables (a page table is used to turn a virtual address into a
physical memory address). VmallocUsed is the amount of memory being used as virtual
address space.
[Chart: Memory Used by Kernel (mem.kernel); unit: MiB; source: proc:/proc/meminfo; dimensions: Slab, KernelStack, PageTables, VmallocUsed, Percpu]

slab
Reclaimable is the amount of memory which the kernel can reuse. Unreclaimable cannot be
reused even when the kernel is running low on memory.
[Chart: Reclaimable Kernel Memory (mem.slab); unit: MiB; source: proc:/proc/meminfo; dimensions: reclaimable, unreclaimable]

hugepages
Hugepages is a feature that allows the kernel to utilize the multiple page size capabilities of
modern hardware architectures. The kernel creates multiple pages of virtual memory,
mapped from both physical RAM and swap. There is a mechanism in the CPU architecture
called "Translation Lookaside Buffers" (TLB) to manage the mapping of virtual memory
pages to actual physical memory addresses. The TLB is a limited hardware resource, so
utilizing a large amount of physical memory with the default page size consumes the TLB
and adds processing overhead. By utilizing Huge Pages, the kernel is able to create pages
of much larger sizes, each page consuming a single resource in the TLB. Huge Pages are
pinned to physical RAM and cannot be swapped/paged out.


Transparent HugePages (THP) backs virtual memory with huge pages, supporting automatic
promotion and demotion of page sizes. It works for all applications for anonymous memory
mappings and tmpfs/shmem.
[Chart: Transparent HugePages Memory (mem.transparent_hugepages); unit: MiB; source: proc:/proc/meminfo; dimensions: anonymous]

Disks
Charts with performance information for all the system disks. Special care has been given
to present disk performance metrics in a way compatible with iostat -x. netdata by default
does not render performance charts for individual partitions and unmounted virtual disks.
Disabled charts can still be enabled by configuring the relevant settings in the netdata
configuration file.
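All the per-disk charts below are derived from the cumulative counters in /proc/diskstats.
A minimal sketch, assuming a device named "sda", of how iostat-style util, await and svctm
fall out of two samples (sector counts in this file are 512-byte units):

import time

def diskstats(dev):
    # Fields after the device name: reads, reads merged, sectors read,
    # ms reading, writes, writes merged, sectors written, ms writing,
    # I/Os in flight, ms busy, weighted ms busy (the backlog).
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == dev:
                v = [int(x) for x in parts[3:14]]
                return {"ops": v[0] + v[4], "wait_ms": v[3] + v[7],
                        "busy_ms": v[9]}
    raise ValueError(f"device {dev!r} not found")

a = diskstats("sda")
time.sleep(1)
b = diskstats("sda")
ops = b["ops"] - a["ops"]
busy = b["busy_ms"] - a["busy_ms"]
wait = b["wait_ms"] - a["wait_ms"]
print(f"util  = {busy / 10:.1f} %")  # busy ms out of a 1000 ms interval
if ops:
    print(f"await = {wait / ops:.2f} ms/op   svctm = {busy / ops:.2f} ms/op")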

sda
Amount of data transferred to and from disk.
[Chart: Disk I/O Bandwidth (disk.sda); unit: MiB/s; source: proc:/proc/diskstats; dimensions: reads, writes]


[Chart: Amount of Discarded Data (disk_ext.sda); unit: KiB/s; source: proc:/proc/diskstats; dimensions: discards]

Completed disk I/O operations. Keep in mind the number of operations requested might be
higher, since the system is able to merge operations that are adjacent to each other (see
the merged operations chart).
[Chart: Disk Completed I/O Operations (disk_ops.sda); unit: operations/s; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Disk Completed Extended I/O Operations (disk_ext_ops.sda); unit: operations/s; source: proc:/proc/diskstats; dimensions: flushes]


Backlog is an indication of the duration of pending disk operations. On every I/O event
the system multiplies the time spent doing I/O since the last update of this field by the
number of pending operations. While not accurate, this metric can provide an indication of
the expected completion time of the operations in progress.
[Chart: Disk Backlog (disk_backlog.sda); unit: milliseconds; source: proc:/proc/diskstats; dimensions: backlog]

Disk Busy Time measures the amount of time the disk was busy with something.
[Chart: Disk Busy Time (disk_busy.sda); unit: milliseconds; source: proc:/proc/diskstats; dimensions: busy]

Disk Utilization measures the amount of time the disk was busy with something. This is not
related to its performance. 100% means that the system always had an outstanding
operation on the disk. Keep in mind that depending on the underlying technology of the disk,
100% here may or may not be an indication of congestion.
[Chart: Disk Utilization Time (disk_util.sda); unit: % of time working; source: proc:/proc/diskstats; dimensions: utilization]


The average time for I/O requests issued to the device to be served. This includes the time
spent by the requests in queue and the time spent servicing them.
[Chart: Average Completed I/O Operation Time (disk_await.sda); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: reads, writes]

The average time for extended I/O requests issued to the device to be served. This includes
the time spent by the requests in queue and the time spent servicing them.
[Chart: Average Completed Extended I/O Operation Time (disk_ext_await.sda); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: flushes]

The average I/O operation size.


[Chart: Average Completed I/O Operation Bandwidth (disk_avgsz.sda); unit: KiB/operation; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Average Amount of Discarded Data (disk_ext_avgsz.sda); unit: KiB/operation; source: proc:/proc/diskstats; dimensions: discards]


The average service time for completed I/O operations. This metric is calculated using the
total busy time of the disk and the number of completed operations. If the disk is able to
execute multiple operations in parallel, the reported average service time will be
misleading.
[Chart: Average Service Time (disk_svctm.sda); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: svctm]

The number of merged disk operations. The system is able to merge adjacent I/O operations;
for example, two 4KB reads can become one 8KB read before being given to the disk.
[Chart: Disk Merged Operations (disk_mops.sda); unit: merged operations/s; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Disk Merged Discard Operations (disk_ext_mops.sda); unit: merged operations/s; source: proc:/proc/diskstats; dimensions: discards]

The sum of the duration of all completed I/O operations. This number can exceed the
interval if the disk is able to execute I/O operations in parallel.
[Chart: Disk Total I/O Time (disk_iotime.sda); unit: milliseconds/s; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Disk Total I/O Time for Extended Operations (disk_ext_iotime.sda); unit: milliseconds/s; source: proc:/proc/diskstats; dimensions: flushes]


sr0
Amount of data transferred to and from disk.
[Chart: Disk I/O Bandwidth (disk.sr0); unit: KiB/s; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Amount of Discarded Data (disk_ext.sr0); unit: KiB/s; source: proc:/proc/diskstats; dimensions: discards]

Completed disk I/O operations. Keep in mind the number of operations requested might be
higher, since the system is able to merge operations that are adjacent to each other (see
the merged operations chart).
[Chart: Disk Completed I/O Operations (disk_ops.sr0); unit: operations/s; source: proc:/proc/diskstats; dimensions: reads, writes]


[Chart: Disk Completed Extended I/O Operations (disk_ext_ops.sr0); unit: operations/s; source: proc:/proc/diskstats; dimensions: discards, flushes]

Backlog is an indication of the duration of pending disk operations. On every I/O event
the system multiplies the time spent doing I/O since the last update of this field by the
number of pending operations. While not accurate, this metric can provide an indication of
the expected completion time of the operations in progress.
[Chart: Disk Backlog (disk_backlog.sr0); unit: milliseconds; source: proc:/proc/diskstats; dimensions: backlog]

Disk Busy Time measures the amount of time the disk was busy with something.
[Chart: Disk Busy Time (disk_busy.sr0); unit: milliseconds; source: proc:/proc/diskstats; dimensions: busy]


Disk Utilization measures the amount of time the disk was busy with something. This is not
related to its performance. 100% means that the system always had an outstanding
operation on the disk. Keep in mind that depending on the underlying technology of the disk,
100% here may or may not be an indication of congestion.
[Chart: Disk Utilization Time (disk_util.sr0); unit: % of time working; source: proc:/proc/diskstats; dimensions: utilization]

The average time for I/O requests issued to the device to be served. This includes the time
spent by the requests in queue and the time spent servicing them.
[Chart: Average Completed I/O Operation Time (disk_await.sr0); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: reads, writes]

The average time for extended I/O requests issued to the device to be served. This includes
the time spent by the requests in queue and the time spent servicing them.
[Chart: Average Completed Extended I/O Operation Time (disk_ext_await.sr0); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: discards, flushes]

The average I/O operation size.


[Chart: Average Completed I/O Operation Bandwidth (disk_avgsz.sr0); unit: KiB/operation; source: proc:/proc/diskstats; dimensions: reads, writes]


[Chart: Average Amount of Discarded Data (disk_ext_avgsz.sr0); unit: KiB/operation; source: proc:/proc/diskstats; dimensions: discards]

The average service time for completed I/O operations. This metric is calculated using the
total busy time of the disk and the number of completed operations. If the disk is able to
execute multiple operations in parallel, the reported average service time will be
misleading.
[Chart: Average Service Time (disk_svctm.sr0); unit: milliseconds/operation; source: proc:/proc/diskstats; dimensions: svctm]

The sum of the duration of all completed I/O operations. This number can exceed the
interval if the disk is able to execute I/O operations in parallel.
[Chart: Disk Total I/O Time (disk_iotime.sr0); unit: milliseconds/s; source: proc:/proc/diskstats; dimensions: reads, writes]

[Chart: Disk Total I/O Time for Extended Operations (disk_ext_iotime.sr0); unit: milliseconds/s; source: proc:/proc/diskstats; dimensions: discards, flushes]


Disk space utilization. reserved for root is automatically reserved by the system to
prevent the root user from running out of space.
[Chart: Disk Space Usage for / [overlay] (disk_space._); unit: GiB; source: diskspace; dimensions: avail, used, reserved for root]
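The used/avail/reserved-for-root split can be reproduced with os.statvfs: f_bfree counts
all free blocks while f_bavail counts only those available to unprivileged users, so their
difference is the root reservation. A minimal sketch:

import os

st = os.statvfs("/")
block = st.f_frsize
GiB = 1024 ** 3
used = (st.f_blocks - st.f_bfree) * block
avail = st.f_bavail * block
reserved = (st.f_bfree - st.f_bavail) * block  # "reserved for root"
print(f"used={used / GiB:.2f} GiB avail={avail / GiB:.2f} GiB "
      f"reserved for root={reserved / GiB:.2f} GiB")
print(f"inodes: used={st.f_files - st.f_ffree} avail={st.f_favail}")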

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of
file system implementations, the maximum number of inodes is fixed at filesystem creation,
limiting the maximum number of files the filesystem can hold. It is possible for a device to
run out of inodes. When this happens, new files cannot be created on the device, even
though there may be free space available.
[Chart: Disk Files (inodes) Usage for / [overlay] (disk_inodes._); unit: inodes; source: diskspace; dimensions: avail, used]

/dev


Disk space utilization. reserved for root is automatically reserved by the system to
prevent the root user from running out of space.
[Chart: Disk Space Usage for /dev [tmpfs] (disk_space._dev); unit: MiB; source: diskspace; dimensions: avail]

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of
file system implementations, the maximum number of inodes is fixed at filesystem creation,
limiting the maximum number of files the filesystem can hold. It is possible for a device to
run out of inodes. When this happens, new files cannot be created on the device, even
though there may be free space available.
[Chart: Disk Files (inodes) Usage for /dev [tmpfs] (disk_inodes._dev); unit: inodes; source: diskspace; dimensions: avail, used]

/dev/shm


Disk space utilization. reserved for root is automatically reserved by the system to
prevent the root user from running out of space.
[Chart: Disk Space Usage for /dev/shm [shm] (disk_space._dev_shm); unit: MiB; source: diskspace; dimensions: avail]

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of
file system implementations, the maximum number of inodes is fixed at filesystem creation,
limiting the maximum number of files the filesystem can hold. It is possible for a device to
run out of inodes. When this happens, new files cannot be created on the device, even
though there may be free space available.
[Chart: Disk Files (inodes) Usage for /dev/shm [shm] (disk_inodes._dev_shm); unit: inodes; source: diskspace; dimensions: avail, used]

Networking Stack
Metrics for the networking stack of the system. These metrics are collected from
/proc/net/netstat, apply to both IPv4 and IPv6 traffic, and are related to the operation
of the kernel networking stack.

tcp


TCP connection aborts. baddata (TCPAbortOnData) happens while the connection is in
FIN_WAIT1 and the kernel receives a packet with a sequence number beyond the last one for
this connection - the kernel responds with RST (closes the connection). userclosed
(TCPAbortOnClose) happens when the kernel receives data on an already closed connection
and responds with RST. nomemory (TCPAbortOnMemory) happens when there are too many
orphaned sockets (not attached to an fd) and the kernel has to drop a connection -
sometimes it will send an RST, sometimes it won't. timeout (TCPAbortOnTimeout) happens
when a connection times out. linger (TCPAbortOnLinger) happens when the kernel kills a
socket that was already closed by the application and lingered around for long enough.
failed (TCPAbortFailed) happens when the kernel attempted to send an RST but failed
because there was no memory available.
[Chart: TCP Connection Aborts (ip.tcpconnaborts); unit: connections/s; source: proc:/proc/net/netstat; dimensions: userclosed]

ecn
Explicit Congestion Notification (ECN) (https://en.wikipedia.org
/wiki/Explicit_Congestion_Notification) is a TCP extension that allows end-to-end notification
of network congestion without dropping packets. ECN is an optional feature that may be
used between two ECN-enabled endpoints when the underlying network infrastructure also
supports it.

[Chart: IP ECN Statistics (ip.ecnpkts); unit: pps; source: proc:/proc/net/netstat; dimensions: NoECTP]

IPv4 Networking

Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4)
(https://en.wikipedia.org/wiki/IPv4) is the fourth version of the Internet Protocol (IP). It is one of
the core protocols of standards-based internetworking methods in the Internet. IPv4 is a
connectionless protocol for use on packet-switched networks. It operates on a best effort delivery
model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance
of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer
transport protocol, such as the Transmission Control Protocol (TCP).

sockets
[Chart: IPv4 Sockets Used (ipv4.sockstat_sockets); unit: sockets; source: proc:/proc/net/sockstat; dimensions: used]

packets
[Chart: IPv4 Packets (ipv4.packets); unit: pps; source: proc:/proc/net/snmp; dimensions: received, sent, delivered]

icmp


[Chart: IPv4 ICMP Packets (ipv4.icmp); unit: pps; source: proc:/proc/net/snmp; dimensions: received, sent]

[Chart: IPv4 ICMP Errors (ipv4.icmp_errors); unit: pps; source: proc:/proc/net/snmp; dimensions: InErrors, OutErrors, InCsumErrors]

[Chart: IPv4 ICMP Messages (ipv4.icmpmsg); unit: pps; source: proc:/proc/net/snmp; dimensions: InEchoReps, OutEchoReps, InDestUnreachs, OutDestUnreachs, InRedirects, OutRedirects, InEchos, OutEchos, InRouterAdvert, OutRouterAdvert, InRouterSelect, OutRouterSelect, InTimeExcds, OutTimeExcds, InParmProbs, OutParmProbs, InTimestamps, OutTimestamps, InTimestampReps]

tcp


The number of established TCP connections (known as CurrEstab). This is a snapshot of the
established connections at the time of measurement (i.e. a connection established and
disconnected within the same iteration will not affect this metric).
[Chart: IPv4 TCP Connections (ipv4.tcpsock); unit: active connections; source: proc:/proc/net/snmp; dimensions: connections]

[Chart: IPv4 TCP Sockets (ipv4.sockstat_tcp_sockets); unit: sockets; source: proc:/proc/net/sockstat; dimensions: alloc, inuse, timewait]

[Chart: IPv4 TCP Packets (ipv4.tcppackets); unit: pps; source: proc:/proc/net/snmp; dimensions: received, sent]


active or ActiveOpens is the number of outgoing TCP connections attempted by this host.
passive or PassiveOpens is the number of incoming TCP connections accepted by this host.
[Chart: IPv4 TCP Opens (ipv4.tcpopens); unit: connections/s; source: proc:/proc/net/snmp; dimensions: active, passive]

EstabResets is the number of established connection resets (i.e. connections that made a
direct transition from ESTABLISHED or CLOSE_WAIT to CLOSED). OutRsts is the number of TCP
segments sent with the RST flag set (for both IPv4 and IPv6). AttemptFails is the number
of times TCP connections made a direct transition from either SYN_SENT or SYN_RECV to
CLOSED, plus the number of times TCP connections made a direct transition from SYN_RECV to
LISTEN. TCPSynRetrans shows retries for new outbound TCP connections, which can indicate
general connectivity issues or backlog on the remote host.
[Chart: IPv4 TCP Handshake Issues (ipv4.tcphandshake); unit: events/s; source: proc:/proc/net/snmp; dimensions: EstabResets, OutRsts]

[Chart: IPv4 TCP Sockets Memory (ipv4.sockstat_tcp_mem); unit: KiB; source: proc:/proc/net/sockstat; dimensions: mem]


udp
[Chart: IPv4 UDP Sockets (ipv4.sockstat_udp_sockets); unit: sockets; source: proc:/proc/net/sockstat; dimensions: inuse]

[Chart: IPv4 UDP Packets (ipv4.udppackets); unit: packets/s; source: proc:/proc/net/snmp; dimensions: received, sent]

[Chart: IPv4 UDP Errors (ipv4.udperrors); unit: events/s; source: proc:/proc/net/snmp; dimensions: RcvbufErrors, SndbufErrors, InErrors, NoPorts, InCsumErrors, IgnoredMulti]


[Chart: IPv4 UDP Sockets Memory (ipv4.sockstat_udp_mem); unit: KiB; source: proc:/proc/net/sockstat; dimensions: mem]

IPv6 Networking
Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6)
(https://en.wikipedia.org/wiki/IPv6) is the most recent version of the Internet Protocol (IP), the
communications protocol that provides an identification and location system for computers on
networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering
Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is
intended to replace IPv4.

packets
[Chart: IPv6 Packets (ipv6.packets); unit: packets/s; source: proc:/proc/net/snmp6; dimensions: received]

errors


[Chart: IPv6 Errors (ipv6.errors); unit: packets/s; source: proc:/proc/net/snmp6; dimensions: InDiscards]

tcp6
[Chart: IPv6 TCP Sockets (ipv6.sockstat6_tcp_sockets); unit: sockets; source: proc:/proc/net/sockstat6; dimensions: inuse]

Network Interfaces
Performance metrics for network interfaces.

eth0


[Chart: Bandwidth (net.eth0); unit: megabits/s; source: proc:/proc/net/dev; dimensions: received, sent]

[Chart: Packets (net_packets.eth0); unit: pps; source: proc:/proc/net/dev; dimensions: received, sent]

State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing,
5 - dormant, 6 - up
[Chart: Interface Operational State (net_operstate.eth0); unit: state; source: proc:/proc/net/dev; dimensions: state]


State map: 0 - down, 1 - up


[Chart: Interface Physical Link State (net_carrier.eth0); unit: state; source: proc:/proc/net/dev; dimensions: carrier]

[Chart: Interface MTU (net_mtu.eth0); unit: octets; source: proc:/proc/net/dev; dimensions: mtu]

Firewall (netfilter)
Performance metrics of the netfilter components.

connection tracker
Netfilter Connection Tracker performance metrics. The connection tracker keeps track of all
connections of the machine, inbound and outbound. It works by keeping a database with all
open connections, tracking network and address translation and connection expectations.


[Chart: Connection Tracker Connections (netfilter.conntrack_sockets); unit: active connections; source: proc:/proc/net/stat/nf_conntrack; dimensions: connections]

Applications
Per-application statistics are collected using netdata's apps.plugin. This plugin walks
through all processes and aggregates statistics for applications of interest, defined in
/etc/netdata/apps_groups.conf, which can be edited by running /etc/netdata/edit-config
apps_groups.conf (the default is here (https://github.com/netdata/netdata/blob/master
/collectors/apps.plugin/apps_groups.conf)). The plugin internally builds a process tree
(much like ps fax does) and groups processes together (evaluating both child and parent
processes) so that the result is always a chart with a predefined set of dimensions (of
course, only application groups found running are reported). The reported values are
compatible with top, although the netdata plugin also counts the resources of exited
children (unlike top, which shows only the resources of the currently running processes).
So for processes like shell scripts, the reported values include the resources used by the
commands these scripts run within each timeframe.
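For illustration, entries in apps_groups.conf map a group name (which becomes a chart
dimension) to one or more process-name patterns; the group names below are hypothetical
examples, not netdata defaults:

# hypothetical example groups - each line is "group: process patterns"
myapp: node python3
databases: mysqld postgres mongod
webservers: nginx httpd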

cpu


[Chart: Apps CPU Time (100% = 1 core) (apps.cpu); unit: percentage; source: apps; dimensions: netdata, apps.plugin, go.d.plugin]

[Chart: Apps CPU User Time (100% = 1 core) (apps.cpu_user); unit: percentage; source: apps; dimensions: netdata, apps.plugin, go.d.plugin]

[Chart: Apps CPU System Time (100% = 1 core) (apps.cpu_system); unit: percentage; source: apps; dimensions: netdata, apps.plugin, go.d.plugin]

disk


[Chart: Apps Disk Reads (apps.preads); unit: KiB/s; source: apps; dimensions: netdata, apps.plugin, charts.d.plugin, python.d.plugin, go.d.plugin, other]

[Chart: Apps Disk Writes (apps.pwrites); unit: KiB/s; source: apps; dimensions: netdata, apps.plugin, charts.d.plugin, python.d.plugin, go.d.plugin, other]


[Chart: Apps Disk Logical Reads (apps.lreads); unit: KiB/s; source: apps; dimensions: apps.plugin]

[Chart: Apps I/O Logical Writes (apps.lwrites); unit: KiB/s; source: apps; dimensions: apps.plugin]

[Chart: Apps Open Files (apps.files); unit: open files; source: apps; dimensions: apps.plugin]

mem


Real memory (RAM) used by applications. This does not include shared memory.
[Chart: Apps Real Memory (w/o shared) (apps.mem); unit: MiB; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

Virtual memory allocated by applications. Please check this article
(https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more information.
[Chart: Apps Virtual Memory Size (apps.vmem); unit: MiB; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

[Chart: Apps Minor Page Faults (apps.minor_faults); unit: page faults/s; source: apps; dimensions: netdata, other]

processes


[Chart: Apps Threads (apps.threads); unit: threads; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

[Chart: Apps Processes (apps.processes); unit: processes; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

Carried-over process group uptime since the Netdata restart, i.e. the period of time
within which at least one process in the group was running.
[Chart: Apps Carried Over Uptime (apps.uptime); unit: time; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]


[Chart: Apps Minimum Uptime (apps.uptime_min); unit: time; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

[Chart: Apps Average Uptime (apps.uptime_avg); unit: time; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]

[Chart: Apps Maximum Uptime (apps.uptime_max); unit: time; source: apps; dimensions: netdata, apps.plugin, go.d.plugin, other]


[Chart: Apps Pipes (apps.pipes); unit: open pipes; source: apps; dimensions: apps.plugin]

swap
[Chart: Apps Swap Memory (apps.swap); unit: MiB; source: apps; dimensions: netdata, apps.plugin, charts.d.plugin, python.d.plugin, go.d.plugin, other]

[Chart: Apps Major Page Faults (swap read) (apps.major_faults); unit: page faults/s; source: apps; dimensions: netdata, apps.plugin, charts.d.plugin, python.d.plugin, go.d.plugin, other]

net


[Chart: Apps Open Sockets (apps.sockets); unit: open sockets; source: apps; dimensions: netdata, apps.plugin, charts.d.plugin, python.d.plugin, go.d.plugin, other]

User Groups
Per-user-group statistics are collected using netdata's apps.plugin. This plugin walks
through all processes and aggregates statistics per user group. The reported values are
compatible with top, although the netdata plugin also counts the resources of exited
children (unlike top, which shows only the resources of the currently running processes).
So for processes like shell scripts, the reported values include the resources used by the
commands these scripts run within each timeframe.

cpu
[Chart: User Groups CPU Time (100% = 1 core) (groups.cpu); unit: percentage; source: apps; dimensions: netdata]


[Chart: User Groups CPU User Time (100% = 1 core) (groups.cpu_user); unit: percentage; source: apps; dimensions: netdata]

[Chart: User Groups CPU System Time (100% = 1 core) (groups.cpu_system); unit: percentage; source: apps; dimensions: netdata]

disk


[Chart: User Groups Disk Reads (groups.preads); unit: KiB/s; source: apps; dimensions: root, netdata]

[Chart: User Groups Disk Writes (groups.pwrites); unit: KiB/s; source: apps; dimensions: root, netdata]


[Chart: User Groups Disk Logical Reads (groups.lreads); unit: KiB/s; source: apps; dimensions: netdata]

[Chart: User Groups I/O Logical Writes (groups.lwrites); unit: KiB/s; source: apps; dimensions: netdata]

[Chart: User Groups Open Files (groups.files); unit: open files; source: apps; dimensions: netdata]

mem


Real memory (RAM) used per user group. This does not include shared memory.
[Chart: User Groups Real Memory (w/o shared) (groups.mem); unit: MiB; source: apps; dimensions: root, netdata]

Virtual memory allocated per user group since the Netdata restart. Please check this
article (https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more
information.
[Chart: User Groups Virtual Memory Size (groups.vmem); unit: MiB; source: apps; dimensions: root, netdata]

[Chart: User Groups Minor Page Faults (groups.minor_faults); unit: page faults/s; source: apps; dimensions: root, netdata]

processes


[Chart: User Groups Threads (groups.threads); unit: threads; source: apps; dimensions: root, netdata]

[Chart: User Groups Processes (groups.processes); unit: processes; source: apps; dimensions: root, netdata]

Carried-over process group uptime, i.e. the period of time within which at least one
process in the group was running.
[Chart: User Groups Carried Over Uptime (groups.uptime); unit: time; source: apps; dimensions: root, netdata]


[Chart: User Groups Minimum Uptime (groups.uptime_min); unit: time; source: apps; dimensions: root, netdata]

User Groups Average Uptime (groups.uptime_avg)
[chart: time, apps | groups.uptime_avg; dimensions: root, netdata]

User Groups Maximum Uptime (groups.uptime_max)
[chart: time, apps | groups.uptime_max; dimensions: root, netdata]


User Groups Pipes (groups.pipes)
[chart: open pipes, apps | groups.pipes; dimensions: netdata]

swap
User Groups Swap Memory (groups.swap)
[chart: MiB, apps | groups.swap; dimensions: root, netdata]

User Groups Major Page Faults (swap read) (groups.major_faults)
[chart: page faults/s, apps | groups.major_faults; dimensions: root, netdata]

net


User Groups Open Sockets (groups.sockets)
[chart: open sockets, apps | groups.sockets; dimensions: root, netdata]

Users
Per user statistics are collected using netdata's apps.plugin. This plugin walks through all
processes and aggregates statistics per user. The reported values are compatible with top,
although the netdata plugin also counts the resources of exited children (unlike top, which
shows only the resources of the currently running processes). So for processes like shell
scripts, the reported values include the resources used by the commands these scripts run
within each timeframe.
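
A minimal Python sketch of that aggregation idea (our illustration only; apps.plugin is a C program that also normalizes the collected ticks against /proc/stat totals, see the Normalization Ratios charts near the end of this page):

    import os
    from collections import defaultdict

    HZ = os.sysconf("SC_CLK_TCK")  # clock ticks per second
    cpu_seconds = defaultdict(float)

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            uid = os.stat(f"/proc/{pid}").st_uid  # process owner
            with open(f"/proc/{pid}/stat") as f:
                fields = f.read().rsplit(")", 1)[1].split()
        except (FileNotFoundError, ProcessLookupError):
            continue  # the process exited mid-scan
        # stat fields 14-17: utime, stime, cutime, cstime (clock ticks);
        # cutime/cstime are the ticks of already-exited, reaped children
        utime, stime, cutime, cstime = (int(x) for x in fields[11:15])
        cpu_seconds[uid] += (utime + stime + cutime + cstime) / HZ

    for uid, secs in sorted(cpu_seconds.items()):
        print(f"uid {uid}: {secs:.1f} s of CPU time (children included)")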

cpu
Users CPU Time (100% = 1 core) (users.cpu)
[chart: percentage, apps | users.cpu; dimensions: root, netdata]


Users CPU User Time (100% = 1 core) (users.cpu_user)
[chart: percentage, apps | users.cpu_user; dimensions: root, netdata]

Users CPU System Time (100% = 1 core) (users.cpu_system)
[chart: percentage, apps | users.cpu_system; dimensions: root, netdata]

disk


Users Disk Reads (users.preads)
[chart: KiB/s, apps | users.preads; dimensions: root, netdata]

Users Disk Writes (users.pwrites)
[chart: KiB/s, apps | users.pwrites; dimensions: root, netdata]


Users Disk Logical Reads (users.lreads)
[chart: KiB/s, apps | users.lreads; dimensions: root]

Users I/O Logical Writes (users.lwrites)
[chart: KiB/s, apps | users.lwrites; dimensions: root]

Users Open Files (users.files)
[chart: open files, apps | users.files; dimensions: root]

mem


Real memory (RAM) used per user. This does not include shared memory.
Users Real Memory (w/o shared) (users.mem)
[chart: MiB, apps | users.mem; dimensions: root, netdata]

Virtual memory allocated per user. Please check this article
(https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more information.
Users Virtual Memory Size (users.vmem)
[chart: MiB, apps | users.vmem; dimensions: root, netdata]

Users Minor Page Faults (users.minor_faults)
[chart: page faults/s, apps | users.minor_faults; dimensions: root, netdata]

processes


Users Threads (users.threads)
[chart: threads, apps | users.threads; dimensions: root, netdata]

Users Processes (users.processes)
[chart: processes, apps | users.processes; dimensions: root, netdata]

Carried over process group uptime since the Netdata restart. The period of time within
which at least one process in the group was running.
Users Carried Over Uptime (users.uptime)
[chart: time, apps | users.uptime; dimensions: root, netdata]


Users Minimum Uptime (users.uptime_min)
[chart: time, apps | users.uptime_min; dimensions: root, netdata]

Users Average Uptime (users.uptime_avg)
[chart: time, apps | users.uptime_avg; dimensions: root, netdata]

Users Maximum Uptime (users.uptime_max)
[chart: time, apps | users.uptime_max; dimensions: root, netdata]


Users Pipes (users.pipes)
[chart: open pipes, apps | users.pipes; dimensions: root]

swap
Users Swap Memory (users.swap)
[chart: MiB, apps | users.swap; dimensions: root, netdata]

Users Major Page Faults (swap read) (users.major_faults)
[chart: page faults/s, apps | users.major_faults; dimensions: root, netdata]

net


Users Open Sockets (users.sockets)
[chart: open sockets, apps | users.sockets; dimensions: root, netdata]

Netdata Monitoring
Performance metrics for the operation of netdata itself and its plugins.

netdata
Netdata Network Traffic (netdata.net)
[chart: megabits/s, netdata:stats | netdata.net; dimensions: in, out]

Netdata CPU usage (netdata.server_cpu)
[chart: milliseconds/s, netdata:stats | netdata.server_cpu; dimensions: user, system]


Netdata uptime (netdata.uptime)
[chart: time, netdata:stats | netdata.uptime; dimensions: uptime]

Netdata Web Clients (netdata.clients)
[chart: connected clients, netdata:stats | netdata.clients; dimensions: clients]

Netdata Web Requests (netdata.requests)
[chart: requests/s, netdata:stats | netdata.requests; dimensions: requests]


The netdata API response time measures the time netdata needed to serve requests. This
time includes everything from the reception of the first byte of a request to the dispatch of
the last byte of its reply; it therefore includes all network latencies involved (i.e. a client on
a slow network will influence these metrics).
Netdata API Response Time (netdata.response_time)
[chart: milliseconds/request, netdata:stats | netdata.response_time; dimensions: average, max]
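
Since these numbers include network time, a client-side measurement should roughly track the average dimension. A quick sketch against this host's API (the /api/v1/data endpoint and its chart/points parameters are netdata's standard v1 query API; the host and port are taken from this dashboard's URL and may differ on your system):

    import time
    import urllib.request

    URL = "http://172.17.0.2:19999/api/v1/data?chart=system.cpu&points=60"

    t0 = time.monotonic()
    with urllib.request.urlopen(URL) as resp:
        body = resp.read()
    elapsed_ms = (time.monotonic() - t0) * 1000
    print(f"{len(body)} bytes in {elapsed_ms:.1f} ms (includes network time)")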

Netdata API Responses Compression Savings Ratio (netdata.compression_ratio)
[chart: percentage, netdata:stats | netdata.compression_ratio; dimensions: savings]

queries
Netdata API Queries (netdata.queries)
[chart: queries/s, netdata:stats | netdata.queries; dimensions: queries]


Netdata API Points (netdata.db_points)
[chart: points/s, netdata:stats | netdata.db_points; dimensions: read, generated]

dbengine
Netdata DB engine data extents' compression savings ratio (netdata.dbengine_compression_ratio)
[chart: percentage, netdata:stats | netdata.dbengine_compression_ratio; dimensions: savings]

Netdata DB engine page cache hit ratio (netdata.page_cache_hit_ratio)
[chart: percentage, netdata:stats | netdata.page_cache_hit_ratio; dimensions: ratio]


Netdata dbengine page cache statistics (netdata.page_cache_stats)
[chart: pages, netdata:stats | netdata.page_cache_stats; dimensions: descriptors, populated, dirty, used_by_collectors]

Netdata dbengine long-term page statistics (netdata.dbengine_long_term_page_stats)
[chart: pages, netdata:stats | netdata.dbengine_long_term_page_stats; dimensions: total]

Netdata DB engine I/O throughput (netdata.dbengine_io_throughput)
[chart: MiB/s, netdata:stats | netdata.dbengine_io_throughput; dimensions: reads, writes]


Netdata DB engine I/O operations (netdata.dbengine_io_operations)
[chart: operations/s, netdata:stats | netdata.dbengine_io_operations; dimensions: reads, writes]

Netdata DB engine errors (netdata.dbengine_global_errors)
[chart: errors/s, netdata:stats | netdata.dbengine_global_errors; dimensions: io_errors, fs_errors, pg_cache_over_half_dirty_events]

Netdata DB engine File Descriptors (netdata.dbengine_global_file_descriptors)
[chart: descriptors, netdata:stats | netdata.dbengine_global_file_descriptors; dimensions: current, max]


Netdata DB engine RAM usage (netdata.dbengine_ram)
[chart: MiB, netdata:stats | netdata.dbengine_ram; dimensions: collectors, metadata]

cgroups
Netdata CGroups Plugin CPU usage (netdata.plugin_cgroups_cpu)
[chart: milliseconds/s, cgroups:stats | netdata.plugin_cgroups_cpu; dimensions: user, system]

proc
Netdata proc plugin CPU usage (netdata.plugin_proc_cpu)
[chart: milliseconds/s, proc:stats | netdata.plugin_proc_cpu; dimensions: user, system]


Netdata proc plugin modules durations (netdata.plugin_proc_modules)
[chart: milliseconds/run, proc:stats | netdata.plugin_proc_modules; dimensions: loadavg, vmstat, ksm, netstat, snmp, diskstats, ipc]

web
Netdata web server thread No 1 CPU usage (netdata.web_thread1_cpu)
[chart: milliseconds/s, web:stats | netdata.web_cpu; dimensions: user, system]

statsd
Netdata statsd charting thread CPU usage (netdata.plugin_statsd_charting_cpu)
[chart: milliseconds/s, statsd:stats | netdata.statsd_cpu; dimensions: user, system]


Netdata statsd collector thread No 1 CPU usage (netdata.plugin_statsd_collector1_cpu)
[chart: milliseconds/s, statsd:stats | netdata.statsd_cpu; dimensions: user, system]

Metrics in the netdata statsd database (netdata.statsd_metrics)
[chart: metrics, statsd:stats | netdata.statsd_metrics; dimensions: gauges, counters, timers, meters, histograms, sets]

Useful metrics in the netdata statsd database (netdata.statsd_useful_metrics)
[chart: metrics, statsd:stats | netdata.statsd_useful_metrics; dimensions: gauges, counters, timers, meters, histograms, sets]


Events processed by the netdata statsd server (netdata.statsd_events)
[chart: events/s, statsd:stats | netdata.statsd_events; dimensions: gauges, counters, timers, meters, histograms, sets, unknown, errors]

Read operations made by the netdata statsd server (netdata.statsd_reads)
[chart: reads/s, statsd:stats | netdata.statsd_reads; dimensions: tcp, udp]

Bytes read by the netdata statsd server (netdata.statsd_bytes)
[chart: kilobits/s, statsd:stats | netdata.statsd_bytes; dimensions: tcp, udp]


Network packets processed by the netdata statsd server (netdata.statsd_packets)
[chart: pps, statsd:stats | netdata.statsd_packets; dimensions: tcp, udp]

statsd server TCP connects and disconnects (netdata.tcp_connects)
[chart: events, statsd:stats | netdata.tcp_connects; dimensions: connects, disconnects]

statsd server TCP connected sockets (netdata.tcp_connected)
[chart: sockets, statsd:stats | netdata.tcp_connected; dimensions: connected]


Private metric charts created by the netdata statsd server (netdata.private_charts)
[chart: charts, statsd:stats | netdata.private_charts; dimensions: charts]

diskspace
Netdata Disk Space Plugin CPU usage (netdata.plugin_diskspace)
[chart: milliseconds/s, diskspace | netdata.plugin_diskspace; dimensions: user, system]

Netdata Disk Space Plugin Duration (netdata.plugin_diskspace_dt)
[chart: milliseconds/run, diskspace | netdata.plugin_diskspace_dt; dimensions: duration]

timex


Netdata Timex Plugin CPU usage (netdata.plugin_timex)
[chart: milliseconds/s, timex | netdata.plugin_timex; dimensions: user]

Netdata Timex Plugin Duration (netdata.plugin_timex_dt)
[chart: milliseconds/run, timex | netdata.plugin_timex_dt; dimensions: duration]

apps.plugin
Apps Plugin CPU (netdata.apps_cpu)
[chart: milliseconds/s, apps | netdata.apps_cpu; dimensions: user, system]


Apps Plugin Files (netdata.apps_sizes)
[chart: files/s, apps | netdata.apps_sizes; dimensions: calls, files, filenames, inode_changes, pids, fds, targets]

Apps Plugin Normalization Ratios (netdata.apps_fix)
[chart: percentage, apps | netdata.apps_fix; dimensions: utime, stime, gtime, minflt, majflt]

Apps Plugin Exited Children Normalization Ratios (netdata.apps_children_fix)
[chart: percentage, apps | netdata.apps_children_fix; dimensions: cutime, cstime, cgtime, cminflt, cmajflt]

aclk


This chart shows if ACLK was online during the entirety of the sample duration.
ACLK/Cloud connection status (netdata.aclk_status)
[chart: connected, netdata:stats | netdata.aclk_status; dimensions: online]

This chart shows how many queries were added for the ACLK_query thread to process and
how many it was actually able to process.
ACLK Queries per second (netdata.aclk_query_per_second)
[chart: queries/s, netdata:stats | netdata.aclk_query_per_second; dimensions: added, dispatched]

Write Queue Mosq->Libwebsockets (netdata.aclk_write_q)
[chart: KiB/s, netdata:stats | netdata.aclk_write_q; dimensions: added, consumed]


Read Queue Libwebsockets->Mosq (netdata.aclk_read_q)
[chart: KiB/s, netdata:stats | netdata.aclk_read_q; dimensions: added, consumed]

Requests received from cloud (netdata.aclk_cloud_req)
[chart: req/s, netdata:stats | netdata.aclk_cloud_req; dimensions: accepted, rejected]

Requests received from cloud by their version (netdata.aclk_cloud_req_version)
[chart: req/s, netdata:stats | netdata.aclk_cloud_req_version; dimensions: v1, v2+]


Time it took to process cloud requested DB queries (netdata.aclk_db_query_time)
[chart: us, netdata:stats | netdata.aclk_db_query_time; dimensions: avg, max, total]

Time from receiving the Cloud Query until it was picked up by query thread (just before passing to the database) (netdata.aclk_cloud_q_recvd_to_processed)
[chart: us, netdata:stats | netdata.aclk_cloud_q_recvd_to_processed; dimensions: avg, max, total]

Requests received from cloud by their type (api endpoint queried) (netdata.aclk_cloud_req_cmd)
[chart: req/s, netdata:stats | netdata.aclk_cloud_req_cmd; dimensions: other, info, data, alarms, alarm_log, chart, charts]


Queries Processed Per Thread (netdata.aclk_query_threads)
[chart: req/s, netdata:stats | netdata.aclk_query_threads; dimensions: Query 0, Query 1]

Cpu Usage For Thread No 0 (netdata.aclk_thread0_cpu)
[chart: milliseconds/s, netdata:stats | netdata.aclk_thread0_cpu; dimensions: user, system]

Cpu Usage For Thread No 1 (netdata.aclk_thread1_cpu)
[chart: milliseconds/s, netdata:stats | netdata.aclk_thread1_cpu; dimensions: user, system]

Netdata (https://github.com/netdata/netdata/wiki)

Copyright 2020, Netdata, Inc (mailto:info@netdata.cloud).

Terms and conditions (https://www.netdata.cloud/terms/)
Privacy Policy (https://www.netdata.cloud/privacy/)

Released under GPL v3 or later (http://www.gnu.org/licenses/gpl-3.0.en.html). Netdata uses third party tools (https://github.com/netdata/netdata/blob/master/REDISTRIBUTED.md).
