Last Updated: February 25, 2016
· mikhailov

Should you be concerned about defaults?

Every piece of software has configuration, and the simplest configuration is just default values chosen for an abstract environment. For. An. Abstract. Environment.

Default settings are averages for the common case. For example, MySQL ships with an 8MB buffer pool by default (fine for a dev machine only), so if you have plenty of RAM but never adjust my.cnf, you don't get the benefit of in-memory operations: everything stays limited by disk, and I/O is the most famous bottleneck. Increasing the buffer pool size lets the database work through memory, which is blazing fast; disks are much slower. There is plenty of other low-level tuning too, such as disabling the query cache to dramatically reduce waiting and increase overall DB performance.
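A minimal my.cnf sketch of the adjustments above. The values are illustrative assumptions, not a recommendation: size the buffer pool to your actual RAM (a common rule of thumb is 50-80% on a dedicated DB server).

```ini
# my.cnf fragment (illustrative values only)
[mysqld]
# Old MySQL defaults to a tiny 8MB buffer pool; raise it so the working
# set fits in memory, e.g. most of the RAM on a dedicated DB server.
innodb_buffer_pool_size = 12G

# Disable the query cache; its global lock often costs more than it saves.
query_cache_type = 0
query_cache_size = 0
```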


Myth #1. Expensive hardware = good, cheap = bad. Somehow everybody knows that a fast, expensive server can handle hundreds of times more requests than the cheapest one, right? Wrong. If you can't measure what is slow, you can't know what exactly to change. What if your server settings are poorly tuned and many processes spend most of their time waiting? It happens far too often, so measure things and visualize them. Fast expensive hardware + default settings = nuclear mixture.

Myth #2. Trust without visualization. I optimized the server yesterday and it runs fine; my colleague says he spent hours on tuning and it utilises resources effectively, right? Wrong. Y U NO SHOW ME GRAPHICS? Trust your eyes only; don't believe words, it's always bad to have no visualization. OK, yesterday everything was fine, but what about today?

Myth #3. I don't want to optimize what I don't know, because everything is workload-specific, right? Almost. There are a number of default settings that were chosen many years ago, and you can use optimized values without worrying about your specific workload. What am I talking about? TCP/IP settings.

Remember the dial-up era? The Internet was slow, and TCP/IP defaults were chosen many years ago for conditions completely different from today's. So why do kernel developers still ship old-fashioned settings? They don't: take a new kernel and you get the benefit for free. Can't get a new kernel? OK, tune your system by hand.

echo "net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.core.wmem_max = 256960
net.core.rmem_max = 256960
net.core.wmem_default = 256960
net.core.rmem_default = 256960
net.ipv4.tcp_wmem = 4096 87380 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
" >> /etc/sysctl.conf

sysctl -p
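One caveat with blindly appending to /etc/sysctl.conf: run it twice and you get duplicate keys. A small sketch that warns about duplicates before you append more settings (the default path is illustrative; pass your own file as the first argument):

```shell
#!/bin/sh
# Sketch: warn about duplicate keys in a sysctl.conf-style file.
conf="${1:-/etc/sysctl.conf}"

awk -F'=' '
  /^[[:space:]]*#/ { next }            # skip comments
  /^[[:space:]]*$/ { next }            # skip blank lines
  {
    key = $1
    gsub(/[[:space:]]/, "", key)       # normalize "net.ipv4.tcp_sack "
    if (seen[key]++) print "duplicate key: " key
  }
' "$conf"
```

With a clean file it prints nothing; with a repeated key it names the offender, so you know to edit instead of append.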

You should measure things before and after; otherwise, please don't spend time on any optimization at all. You can just try the settings today and see how they work. Let me explain what each one means:

net.ipv4.tcp_slow_start_after_idle - disables slow start after an idle period, saving round-trips on long-lived connections, read this

net.ipv4.tcp_sack - selective acknowledgements can cause problems in normal operation by saturating your connection, read this

net.ipv4.tcp_timestamps - disabled to avoid the 12-byte timestamp header overhead on every packet
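To actually measure before and after, check what the kernel currently uses rather than assuming. A sketch (Linux only): `sysctl -n <key>` prints just the value, and the same data lives under /proc/sys with dots mapped to slashes.

```shell
#!/bin/sh
# Sketch: print the currently effective values of the tunables above
# by reading /proc/sys directly (Linux only).
for key in net.ipv4.tcp_slow_start_after_idle \
           net.ipv4.tcp_sack \
           net.ipv4.tcp_timestamps; do
  path="/proc/sys/$(echo "$key" | tr . /)"
  [ -r "$path" ] && echo "$key = $(cat "$path")"
done
```

Run it once before editing sysctl.conf, save the output, run it again after `sysctl -p`, and diff the two: that's your before/after evidence.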

Everything else you can find on Google. Don't be confused by contradictory advice; you are a smart person, you can make the right choice.
