config: avoid sudden downward glitch in max_filter heuristic
The recent heuristic change, while well-intentioned, introduced a
glitch in the calculation of max_servers around the system memory
boundary of 3840 MiB.
For example, a VM with 3.5 GiB of memory would get 26 max_workers,
but if an admin then increased the memory to 4 GiB, it would get only
13 max_workers. This also meant that the VM would use less memory
with 4 GiB configured, and that reducing the memory again would make
the usage jump back up.
Such effects are rather odd, so adapt the heuristic to be more
linear, with basically no decrease of max_servers as system memory
increases.
Make the base_usage a 5/8 fraction of the detected total system
memory, fix the estimate for per-server memory usage to 150 MiB, and
make the warning differentiate between violating the minimum and the
recommended total system memory (with some leeway).
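The updated calculation can be sketched roughly as below. Only the
5/8 base_usage fraction, the 150 MiB per-server estimate, and the +2
added by the caller come from this message; the function name,
integer rounding, and the absence of any clamping are illustrative
assumptions:

```python
PER_SERVER_MIB = 150  # fixed per-server memory estimate (from this message)

def estimate_max_servers(total_mib: int) -> int:
    """Illustrative sketch of the updated heuristic (names are assumptions)."""
    base_usage = (total_mib * 5) // 8     # reserve 5/8 of total system memory
    available = total_mib - base_usage    # the remaining 3/8 is for servers
    servers = available // PER_SERVER_MIB
    return servers + 2                    # mirror the +2 the caller adds

# Linear in total memory: increasing RAM never decreases the result,
# avoiding the old downward glitch around the 3840 MiB boundary.
print(estimate_max_servers(3584))  # 3.5 GiB
print(estimate_max_servers(4096))  # 4 GiB
```

With this shape, going from 3.5 GiB to 4 GiB can only keep or raise
the server count, unlike the previous 26 -> 13 drop.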
A comparison table with system memory in the first column, the
previous max_servers result in the middle (the caller adds +2 to
this, which is done here too), and the result of the updated
calculation in the rightmost column:
As flooding tests here could not use much more than 4 to 6 processes,
the slightly lower values on very low-memory systems should not
matter; they might even improve performance (less memory contention
and lower OOM-kill probability).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>