
cyberciti.biz


Linux Tune Network Stack (Buffers Size) To Increase Networking Performance


I have two servers located in two different data centers. Both servers handle a lot of concurrent large file transfers, but network performance is very poor for large files and degrades further as file size grows. How do I tune TCP under Linux to solve this problem?

By default, the Linux network stack is not configured for high-speed large file transfers across WAN links. This is done to save memory resources. You can easily tune the Linux network stack by increasing the network buffer sizes for high-speed networks that connect server systems, so they can handle more network packets. The default maximum Linux TCP buffer sizes are far too small. TCP memory is calculated automatically based on system memory; you can find the actual values by typing the following commands:

$ cat /proc/sys/net/ipv4/tcp_mem

The default and maximum amount for the receive socket memory:

$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max

The default and maximum amount for the send socket memory:

$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max

The maximum amount of option memory buffers:

$ cat /proc/sys/net/core/optmem_max
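The checks above can be collected into one read-only pass. The helper below (a sketch; the function name is mine, not from the article) prints each limit next to its /proc path and skips files that do not exist, so it is safe to run as a normal user:

```shell
# Print the current TCP/socket buffer limits in one pass (read-only).
show_tcp_buffers() {
    for f in /proc/sys/net/ipv4/tcp_mem \
             /proc/sys/net/core/rmem_default /proc/sys/net/core/rmem_max \
             /proc/sys/net/core/wmem_default /proc/sys/net/core/wmem_max \
             /proc/sys/net/core/optmem_max; do
        if [ -r "$f" ]; then
            printf '%-40s %s\n' "$f" "$(cat "$f")"
        fi
    done
}
show_tcp_buffers
```

Save the output before tuning so you have a baseline to compare against, and to restore if a change hurts performance.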

Tune values
Set the max OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for queues on all protocols. In other words, set the amount of memory that is allocated for each TCP socket when it is opened or created while transferring files.

WARNING! The default value of rmem_max and wmem_max is about 128 KB in most Linux distributions, which may be enough for a low-latency, general-purpose network environment or for apps such as a DNS or web server. However, if the latency is large, the default size might be too small. Please note that the following settings will increase memory usage on your server.

# echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf

You also need to set the minimum, initial, and maximum sizes in bytes:

# echo 'net.ipv4.tcp_rmem = 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem = 10240 87380 12582912' >> /etc/sysctl.conf

Turn on window scaling, which enlarges the transfer window:

# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf

Enable timestamps as defined in RFC 1323:

# echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf

Enable selective acknowledgments:

# echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf

By default, TCP saves various connection metrics in the route cache when the connection closes, so that connections established in the near future can use these to set initial conditions. Usually this increases overall performance, but it may sometimes cause performance degradation. If the following is set, TCP will not cache metrics on closing connections:

# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
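One pitfall with the `echo ... >> /etc/sysctl.conf` approach is that re-running the commands appends duplicate lines. The sketch below wraps the appends in a hypothetical `set_sysctl` helper (my name, not from the article) that skips keys already present; it writes to a temp file as a dry run, but on a real system you would point CONF at /etc/sysctl.conf and run as root:

```shell
# Demo target; use CONF=/etc/sysctl.conf (as root) on a real system.
CONF=$(mktemp)

# Append "key = value" only if the key is not already configured.
set_sysctl() {
    key=$1 value=$2
    if grep -q "^${key}[ =]" "$CONF" 2>/dev/null; then
        echo "skip: ${key} already set in ${CONF}"
    else
        echo "${key} = ${value}" >> "$CONF"
    fi
}

set_sysctl net.core.wmem_max 12582912
set_sysctl net.core.rmem_max 12582912
set_sysctl net.ipv4.tcp_rmem '10240 87380 12582912'
set_sysctl net.ipv4.tcp_wmem '10240 87380 12582912'
set_sysctl net.ipv4.tcp_window_scaling 1

cat "$CONF"
```

Whichever way you edit the file, the settings only take effect after `sysctl -p` (or a reboot).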

Set the maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them:

# echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf

Now reload the changes:

# sysctl -p

Use tcpdump to view traffic on eth0:

# tcpdump -ni eth0
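Before raising netdev_max_backlog, it helps to confirm the backlog is actually overflowing. The kernel exposes per-CPU counters in /proc/net/softnet_stat, where the second hex column counts packets dropped because the input backlog queue was full. A small decoding sketch (the function name is mine):

```shell
# Decode /proc/net/softnet_stat: nonzero "dropped" counts suggest
# raising net.core.netdev_max_backlog.
softnet_drops() {
    cpu=0
    while read -r processed dropped rest; do
        printf 'cpu%-3d processed=%d dropped=%d\n' "$cpu" "0x$processed" "0x$dropped"
        cpu=$((cpu + 1))
    done < /proc/net/softnet_stat
}
softnet_drops
```

If the dropped counters stay at zero under load, the backlog is not your bottleneck and this knob can be left alone.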

Recommended readings:
Please refer to the kernel documentation in Documentation/networking/ip-sysctl.txt for more information. See also the sysctl man page.
