Thomas Lamprecht [Mon, 22 Mar 2021 07:49:55 +0000 (08:49 +0100)]
fix #3164: api: quarantine: allow returning spam from all users
The pmail parameter was only checked for the spam quarantine call, and
there mainly to ensure that a quarantine user can only check their own
mails. Make the pmail parameter optional for this quarantine related
endpoint too, as long as the caller has a role other than quser.
This allows querying the spam quarantine entries of all pmails at
once, providing the backend side to address #3164.
The main argument against this was performance, but postgres handles
even hundreds of thousands of rows rather fine; it is a highly
performant database after all, and this is quite a simple query
(a single join, but no functions on columns, nested queries, or other
performance hogs).
Some data, 45k records on a read-limited disk, gathered with EXPLAIN
ANALYZE:
All caches dropped and fresh start: 440ms
Running for a bit with caches warm: 55ms
A simple extrapolation suggests that for half a million rows we would
spend about 5s in the DB, which is not too bad considering our hard
limit of 30s per request. With the overhead of perl/https, the limit
on my not-so-beefy VM sits at at least ~1.5 million rows from a *cold*
cache, which seems plenty (the default 7 day keep window and an
average of 10 spam mails per day means >21k qusers). And with warm
caches and a beefier machine one can probably gain one or even two
orders of magnitude here.
And in the end, no mail admin is forced to use this; if they run a
setup with tens of millions of spam mails in their spam-keep time
window, they really should not be surprised that querying all of them
has a certain cost.
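The back-of-the-envelope extrapolation above can be sketched as simple arithmetic (linear scaling assumed; the ~1.5M row figure additionally accounts for perl/https overhead, which is not modeled here):

```python
# Linear extrapolation of the EXPLAIN ANALYZE timings above.
# Assumes query time scales roughly linearly with row count.
measured_rows = 45_000
cold_ms, warm_ms = 440, 55  # cold vs. warm cache timings

def extrapolate(target_rows, base_ms, base_rows=measured_rows):
    """Scale a measured query time linearly to a larger row count."""
    return base_ms * target_rows / base_rows

# ~4.9s in the DB for half a million rows, cold cache
half_million_s = extrapolate(500_000, cold_ms) / 1000

# rows fitting into the 30s hard request limit (DB time only; the
# ~1.5M figure in the text also accounts for perl/https overhead)
max_rows = 30_000 / cold_ms * measured_rows

print(f"500k rows cold: ~{half_million_s:.1f}s; "
      f"30s DB budget: ~{max_rows / 1e6:.1f}M rows")
```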
Stoiko Ivanov [Thu, 18 Mar 2021 15:14:49 +0000 (16:14 +0100)]
certs: reload postfix to activate new certificate
the current logic for reloading postfix only does so if the tls config
parameter changes (after rewriting the config files).
this does not cover the case where a certificate is replaced in a
setup which already has tls enabled (the config stays the same, so
postfix does not get reloaded).
the issue is mostly cosmetic, since postfix does eventually fork off
new smtpd instances, which read the files from disk. but it is
inconvenient when trying out the new acme integration and then running
an ssl check on your PMG from the outside, just to see that the
certificate was not updated.
Stoiko Ivanov [Thu, 18 Mar 2021 15:14:48 +0000 (16:14 +0100)]
cluster: use old and new fingerprint on master
when triggering a fingerprint update on the master right after
reloading pmgproxy, as we do for ACME certificates, it can happen that
the connection is made against the old pmgproxy process (with the old
fingerprint). simply trusting both fingerprints in that case seems
acceptable from a security perspective and makes the fingerprint
update more robust.
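a minimal sketch of the idea, assuming SHA-256 fingerprints; the function and variable names here are illustrative, not PMG's actual code:

```python
# Sketch: during a certificate rollover, accept a peer matching either
# the old or the new fingerprint, so a connection that still reaches
# the not-yet-reloaded pmgproxy verifies fine.
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint in colon-separated hex form."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def verify_peer(peer_cert: bytes, trusted_fingerprints: set) -> bool:
    """Accept the peer if it matches any trusted fingerprint."""
    return cert_fingerprint(peer_cert) in trusted_fingerprints

old_cert, new_cert = b"old-cert-der", b"new-cert-der"
trusted = {cert_fingerprint(old_cert), cert_fingerprint(new_cert)}
```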
Stoiko Ivanov [Mon, 1 Mar 2021 14:12:20 +0000 (15:12 +0100)]
backup: add notify parameter to 'classic' backup
for feature-parity, and since we recently had a user in our community
forum who does regular backups via cron+rsync (a small site w/o a
dedicated backup host). such setups could also benefit from this.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stoiko Ivanov [Mon, 1 Mar 2021 14:12:19 +0000 (15:12 +0100)]
backup: fix #3146: add email notifications
this patch adds the email notification for scheduled backup related
tasks, which we already have in all our other products, to our mail
product.
the parameter names are inspired by PBS' datastore config.
the default is 'never', in order to stay consistent with the current
code.
it uses the templating system for the notification, because this
results in less code and a bit of added flexibility for the users.
the recipient address is currently hardcoded to the admin address in
pmg.conf, as we also send the (admin) pmgreport there, and I did not
want to overengineer this (even more).
I briefly considered adding a $SIG{'__DIE__'} handler to the
run_backup API call, but dropped the idea due to the warning in
perlvar(1).
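a sketch of how such a notify mode could be evaluated; the mode values follow the PBS datastore config mentioned above, but the helper itself is hypothetical:

```python
def should_notify(mode: str, backup_failed: bool) -> bool:
    """Decide whether to mail the admin about a finished backup.

    'always' -> mail on success and on failure
    'error'  -> mail only when the backup failed
    'never'  -> no mail at all (the default, matching the old behavior)
    """
    if mode == "always":
        return True
    if mode == "error":
        return backup_failed
    return False  # 'never'
```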
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stoiko Ivanov [Mon, 1 Mar 2021 14:12:18 +0000 (15:12 +0100)]
backup: pbs: prevent race in concurrent backups
If two pbs backup-creation calls happen simultaneously, it is possible
that the first removes the backup dir before the other is done
creating it or sending it to the pbs remote.
This patch takes the same route as non-PBS backups: creating a unique
tempdir indexed by remote, PID, and current time.
the tmp-dir now also needs to be removed in case of an error while
backing up (before, the next invocation would have wiped it).
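a sketch of the tempdir scheme in Python (directory layout and names are illustrative; the real code is Perl):

```python
# Sketch: a per-invocation dir made unique by remote, PID and
# timestamp, removed again on error instead of waiting for the next
# run to wipe it.
import os
import shutil
import time

def make_backup_tmpdir(remote: str, base: str) -> str:
    """Unique per-invocation dir, indexed by remote, PID and time."""
    tmpdir = os.path.join(
        base, f"pbs_{remote}_{os.getpid()}_{int(time.time())}")
    os.makedirs(tmpdir)
    return tmpdir

def run_backup(remote: str, do_backup, base: str = "/var/lib/pmg/backup"):
    tmpdir = make_backup_tmpdir(remote, base)
    try:
        do_backup(tmpdir)
    finally:
        # remove the tempdir even on error; previously a failed run
        # left it behind until the next invocation wiped it (the race)
        shutil.rmtree(tmpdir, ignore_errors=True)
```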
Noticed while having 2 schedules to different PBS instances with the
same interval and w/o random delay.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stoiko Ivanov [Tue, 2 Feb 2021 13:03:16 +0000 (14:03 +0100)]
utils: allow '/' inside email address localpart
The change is motivated by a report in our community forum [0], where
a mail addressed to an address containing '/' in its local part ended
up in the quarantine.
This is permitted by RFC 5322 [1] and, probably more relevantly,
happily accepted and processed by postfix.
Once inside the quarantine (or the statistics database), the records
cannot be displayed (due to the parameter verification failure),
leaving the user unable to delete the quarantined mail.
Apart from the quarantine and statistics, the 'pmg-email-address'
format is only used in the PBSConfig and the fetchmail configuration
(both of which are available only to the admin and can still be
edited irrespective of the set local part).
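to illustrate the relaxed check, a small sketch (the regex here is an illustration of "anything but whitespace and '@'", not PMG's actual 'pmg-email-address' pattern):

```python
import re

# accept any non-whitespace, non-'@' characters in the local part,
# which in particular lets '/' through (illustrative only)
LOCALPART_RE = re.compile(r'^[^\s@]+$')

def is_valid_localpart(localpart: str) -> bool:
    return bool(LOCALPART_RE.match(localpart))
```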
Stoiko Ivanov [Thu, 21 Jan 2021 15:51:04 +0000 (16:51 +0100)]
api: statistics: refactor detail calls
the API calls returning the detailed statistics for a particular
email address share a lot of common code.
This patch introduces a sub to be used in all detail API calls.
Stoiko Ivanov [Tue, 19 Jan 2021 10:38:14 +0000 (11:38 +0100)]
api: spamassassin: update local channels
This patch adds a helper that loops over all SpamAssassin channel
files present in /etc/mail/spamassassin/channel.d and:
* imports the included gpg key into sa-update's keyring
* runs sa-update for each channel separately
the verbose argument of the helper is for reusing the code in
pmg-daily (where we only want to log errors and be less informative).
the $SA_UPDATE variable hardcoding the path /usr/bin/sa-update was
dropped in favor of using 'sa-update' without a path, since we do have
a sensible PATH setting everywhere, and hardcoding paths is
problematic (especially on usr-merged systems).
The choice of invoking sa-update for each channel separately, instead
of providing multiple '--channel' and '--gpgkey' options to a single
command, was made to prevent downloading signatures that were signed
by a key not configured for the channel.
Importing gpg keys is also done with individual sa-update invocations,
because sa-update only imports the last '--import' argument present
(wrong use of Getopt::Long).
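the per-channel loop can be sketched as follows (Python sketch; field names like 'keyfile', 'url' and 'key_id' are assumptions, the real helper is Perl):

```python
def channel_commands(channels):
    """One import plus one update invocation per channel.

    Running sa-update once per channel avoids downloading rules signed
    by a key configured for a *different* channel; importing keys one
    by one works around sa-update honoring only the last '--import'.
    """
    cmds = []
    for ch in channels:
        cmds.append(["sa-update", "--import", ch["keyfile"]])
        cmds.append(["sa-update", "--channel", ch["url"],
                     "--gpgkey", ch["key_id"]])
    return cmds
```

each command would then be run individually, keeping in mind that sa-update conventionally exits non-zero when there is simply no update available.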
Stoiko Ivanov [Tue, 19 Jan 2021 10:38:12 +0000 (11:38 +0100)]
add helper for parsing SA channel.d files
RHEL/CentOS based SpamAssassin installations ship an update script
which reads shell snippets from
/etc/mail/spamassassin/channel.d/*.conf and uses the information there
to update SA rules from the configured channels [0].
Noticed the existence of this directory/mechanism while reading the
announcement of the update channel for the KAM ruleset [1].
The file is parsed as text, instead of being sourced in a shell, in
the hope that the distributed channel files don't rely on running
commands to get the ruleset URL and gpg key.
The parser has some minimal tests added (inspired by
convert_size_test.pl from pve-common).
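a minimal sketch of such a text parser (the sample's variable names follow the CHANNELURL/KEYID convention of the RHEL update script; treat the exact key set, and the sample values, as assumptions):

```python
import re

# match simple VAR=value or VAR="value" shell assignments
ASSIGN_RE = re.compile(r'^([A-Z_]+)=["\']?([^"\']*)["\']?\s*$')

def parse_channel_file(text: str) -> dict:
    """Extract plain VAR=value assignments, ignoring comments.

    Parsing as text instead of sourcing the snippet in a shell means
    no commands from the file are ever executed.
    """
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = ASSIGN_RE.match(line)
        if m:
            result[m.group(1)] = m.group(2)
    return result

sample = """\
# example channel file
CHANNELURL="example.channel.invalid"
KEYID=0123ABCD
"""
```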
Dominik Csapak [Wed, 18 Nov 2020 10:59:36 +0000 (11:59 +0100)]
api2/quarantine: add global sendlink api call
this api call takes an email address, checks it against the relay
domains, prepares a custom quarantine link for that address, and sends
it there.
this has to happen unauthenticated, since the idea is that the user
wants to access the quarantine but has no current ticket (and no old
spam report with a ticket).
we rate limit the requests by allowing only one request per 5 seconds
(to prevent DoS'ing the internal mail server) and only one request
per user per hour.
this api call is disabled by default.
if admins want even more rate limiting, they can set up something
like fail2ban to block hosts hitting this api call often.
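the double rate limit can be sketched like this (in-memory state, purely illustrative; the actual implementation details are not taken from the commit):

```python
import time

class SendLinkLimiter:
    """Global 1-request-per-5s limit plus a per-user 1-per-hour limit."""

    def __init__(self, now=time.monotonic):
        self.now = now  # injectable clock, eases testing
        self.last_global = float("-inf")
        self.last_per_user = {}

    def allow(self, user: str) -> bool:
        t = self.now()
        if t - self.last_global < 5:  # global: 1 request / 5 seconds
            return False
        if t - self.last_per_user.get(user, float("-inf")) < 3600:
            return False              # per user: 1 request / hour
        self.last_global = t
        self.last_per_user[user] = t
        return True
```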
Stoiko Ivanov [Wed, 18 Nov 2020 14:52:54 +0000 (15:52 +0100)]
do not create /cluster/<cid> unconditionally
while looking through the spooldir creation, we noticed the mkdir call
on a relative path. This creates a '/cluster/<cid>/' directory on each
system that has a cluster.conf (<cid> being the node's cluster id).
This directory is not used, since the spooldirs are in
'/var/spool/pmg/cluster/'.
Simply drop the mkdir call, since the spooldirs get created upon
cluster creation (PMG::API2::Cluster::create) and upon joining an
existing cluster.
Stoiko Ivanov [Wed, 18 Nov 2020 14:52:53 +0000 (15:52 +0100)]
fix clustersync after node-deletion
This patch creates the spooldirs for a newly joining cluster node on
the master (/var/spool/pmg/cluster/<newnode-cid>/(spam|attachment|virus)).
This is necessary in order to prevent a failing cluster sync for nodes
joining the cluster after that node has been deleted (this happens if
you remove a node from the cluster and directly rejoin it to the same
master node):
On the first sync after a node was deleted (there is no section config
for a number < maxcid), each node tries to sync the quarantine of the
deleted node from the cluster (in order to be promotable to new
master). This rsync fails because the spooldir for that node never got
created on the master.
The spooldir for a node only gets created on the master on that node's
first sync, which can be 2 minutes after joining the cluster (and
leaving it again).
Thomas Lamprecht [Wed, 18 Nov 2020 09:28:15 +0000 (10:28 +0100)]
pmgbackup: fix missing semicolon leading to weird effects
without a semicolon, the code ran at "compile" (initial parse) time,
not after it. Thus, every code error was attributed to a compilation
error, i.e., bad syntax or the like, not a runtime die.
This was visible as the program getting executed when running the perl
check
> # perl -wc src/bin/pmgbackup
which should normally not be the case.
Originally-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stoiko Ivanov [Thu, 29 Oct 2020 17:49:16 +0000 (18:49 +0100)]
reinject_email: fix connecting for ipv6-only hosts
When PMG is configured with IPv6 addresses only, reinject_email fails
after processing to connect to the postfix/smtpd instance (with
EINVAL).
Setting the host to '::FFFF:127.0.0.1' fixes the issue.
Tested with:
* an ipv6only host (no ipv4 configured)
* a host with ipv6 disabled via sysctl:
```
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
```
* a host with dual-stack setup
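the fix works because '::FFFF:127.0.0.1' is an IPv4-mapped IPv6 address, which can be verified quickly with Python's ipaddress module:

```python
# '::FFFF:127.0.0.1' is an IPv6 literal whose traffic ends up at the
# IPv4 loopback (an IPv4-mapped address), so the same host value can
# serve both address families.
import ipaddress

addr = ipaddress.ip_address("::FFFF:127.0.0.1")
print(addr.version, addr.ipv4_mapped)
```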