Level 9 is what Arch Linux uses for their packages, so it's a widely
used level and a good default tradeoff for such images.
Also add a zstd-max level, which uses the highest standard level that
is still efficient to decompress, i.e., while compression needs more
time and resources, decompression does not.
Some numbers for a Debian 11 minimal template from 2021-05-06:

  uncomp.   321M  100.0%
  gzip      116M   36.1%
  zstd  0   106M   33.0%
  zstd  9    98M   30.5%
  zstd 19    83M   26.8%
So zstd level 19, which is still cheap to extract, needs 33M less than
the current default gzip. The access log from our German CDN server
shows about 490 successful accesses to our system images per day,
which means this change saves `490 downloads/day * 30.4375 days/month
* 33M saved/download` =~ 490 G/month of traffic for us and for users.
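A quick back-of-the-envelope check of that estimate, using only the
numbers quoted above:

```shell
# Verify the monthly traffic-savings arithmetic from the commit message.
awk 'BEGIN {
    downloads = 490        # successful image downloads per day (CDN log)
    days      = 30.4375    # average days per month (365.25 / 12)
    saved_mb  = 33         # gzip 116M vs. zstd level 19 at 83M
    total = downloads * days * saved_mb
    printf "%.0f M/month (~%.0f G/month)\n", total, total / 1000
}'
# prints: 492174 M/month (~492 G/month)
```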
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
 my $compressor = $opts->{compressor} // 'gz';
 my $compressor2cmd_map = {
     gz => 'gzip',
-    zst => 'zstd',
+    gzip => 'gzip',
+    zst => 'zstd -9',
+    zstd => 'zstd -9',
+    'zstd-max' => 'zstd -19 -T0', # maximal level where the decompressor can still run efficiently
+};
+
+my $compressor2ending = {
+    gzip => 'gz',
+    zstd => 'zst',
+    'zstd-max' => 'zst',
 };
 my $compressor_cmd = $compressor2cmd_map->{$compressor};
 die "unknown compressor '$compressor', use one of: " . join(', ', sort keys %$compressor2cmd_map)
     if !defined($compressor_cmd);
-my $final_archive = "${target}.${compressor}";
+my $ending = $compressor2ending->{$compressor} // $compressor;
+my $final_archive = "${target}.${ending}";
 unlink $target;
 unlink $final_archive;