For DB and WAL disks, not only partitions will show up now, but also
one more type of disk that didn't show up before: namely,
GPT-partitioned disks with any partition detected as used.
It's confusing, as the size shown is that of the full disk, with no
indication that a new partition will be appended at the end. This
problem was already present before, but only affected GPT-partitioned
disks where no usage was detected on any partition.
Fabian Ebner [Wed, 6 Oct 2021 09:18:49 +0000 (11:18 +0200)]
partially fix #2285: api: ceph: create osd: allow using partitions
Note that this does not only allow partitions to be used; for DB and
WAL disks, it also allows one more type of disk that wasn't allowed
before: namely, GPT-partitioned disks with any partition detected as
used.
The reason is get_disks' behavior (see the sketch after this list):
* Without $include_partitions=1, the disk has the same usage as its
  first used partition and thus wasn't allowed. (Except when that
  usage was LVM, where the check was bypassed, but luckily OSD
  creation just failed later, because no Ceph volume group would be
  detected.)
* With $include_partitions=1, the disk has usage 'partitions' and
  thus is allowed.
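A minimal sketch of that check with $include_partitions=1; the
variable names and surrounding logic are assumed, not copied from the
actual patch:

    use PVE::Diskmanage;

    # sketch (names assumed): with $include_partitions = 1, a disk with
    # partitions reports usage 'partitions' instead of inheriting the
    # usage of its first used partition, so it passes the in-use check
    my $disks = PVE::Diskmanage::get_disks($devname, 1, 1);
    my $usage = $disks->{$devname}->{used} // '';
    die "disk '$devname' is already in use ($usage)\n"
        if $usage && $usage ne 'partitions';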
Dominik Csapak [Thu, 11 Nov 2021 11:07:07 +0000 (12:07 +0100)]
api: cluster: add jobs/schedule-analyze api call
a simple api call to simulate calendar event triggers
takes a schedule, an optional number (default 10) and an optional
starttime (default 'now'), and returns a list of unix timestamps as
well as human-readable UTC timestamps.
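Internally this boils down to repeatedly computing the next calendar
event. A rough sketch using pve-common's PVE::CalendarEvent helpers
(exact signatures assumed):

    use POSIX qw(strftime);
    use PVE::CalendarEvent;

    # sketch: compute the next 3 trigger times for a given schedule
    my $event = PVE::CalendarEvent::parse_calendar_event('mon..fri 21:00');
    my $t = time();
    for (1 .. 3) {
        $t = PVE::CalendarEvent::compute_next_event($event, $t);
        printf "%d %s\n", $t, strftime('%Y-%m-%d %H:%M:%S', gmtime($t));
    }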
Dominik Csapak [Mon, 25 Oct 2021 14:01:31 +0000 (16:01 +0200)]
api: cephfs: more checks on fs create
namely, whether the fs already exists, and whether there currently is
a standby mds that can be used for the new fs
previously, only one cephfs was possible, so these checks were not
necessary. now, with pacific, it is possible to have multiple cephfs
instances, and we should check for those.
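The two guards could conceptually look like this sketch; the data
structures and messages are assumptions, not taken from the patch:

    # sketch (structures assumed): refuse creation if the name is taken
    # and make sure a standby MDS exists that can serve the new fs
    die "fs '$fsname' already exists\n"
        if grep { $_->{name} eq $fsname } @$fs_list;
    die "no standby MDS daemon available for new fs\n"
        if !grep { $_->{state} eq 'up:standby' } @$mds_list;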
Dominik Csapak [Mon, 25 Oct 2021 14:01:29 +0000 (16:01 +0200)]
ui: ceph: catch missing version for service list
when a daemon is stopped, the version here is 'undefined'. catch that
instead of letting the template renderer run into an error.
this fixes the rendering of the grid backgrounds
Dominik Csapak [Mon, 25 Oct 2021 14:01:28 +0000 (16:01 +0200)]
api: ceph-mds: get mds state when multiple ceph filesystems exist
by iterating over all of them and saving the names of the active ones
this fixes the issue that an mds assigned to a filesystem other than
the first one in the list was wrongly shown as offline
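Conceptually, the iteration collects the active daemon names across
all filesystems, roughly like this sketch (field names as in Ceph's
fsmap JSON, surrounding code assumed):

    # sketch: remember the name of every active MDS over all filesystems
    my $active = {};
    for my $fs (@{ $fsmap->{filesystems} }) {
        for my $info (values %{ $fs->{mdsmap}->{info} }) {
            $active->{ $info->{name} } = 1 if $info->{state} eq 'up:active';
        }
    }
    # an MDS missing from $active is standby (or stopped), no matter
    # which fs in the list it is assigned to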
broadcast the built-in, statically available version info, e.g.:
{
    "release" : "7.0",
    "repoid" : "3ce05d40",
    "version" : "7.0-14"
}
We can expand this with more actual package version info in the
future, but that certainly needs a more elaborate update control
mechanism than the one-shot at boot we have now.
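A minimal sketch of such a one-shot broadcast at startup; the KV key
name is an assumption:

    use JSON;
    use PVE::Cluster;

    # sketch: push the static version info into the cluster-wide KV
    # store once at boot (key name assumed)
    my $info = { release => '7.0', repoid => '3ce05d40', version => '7.0-14' };
    PVE::Cluster::broadcast_node_kv('version-info', encode_json($info));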
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 10 Nov 2021 16:26:48 +0000 (17:26 +0100)]
api: version: only include the console option of the datacenter config
Especially with the new description, the result can get quite huge,
and we do not really require any other info; the console option is
only returned here anyway as a hack, because the datacenter config
API path requires higher privileges.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Mon, 8 Nov 2021 13:07:57 +0000 (14:07 +0100)]
api/backup: handle new vzdump jobs
in addition to listing the vzdump.cron jobs, also list those from the
jobs.cfg file.
updates and creations now go into the new jobs.cfg only, and on
update, starttime+dow get converted to a schedule
this transformation is straightforward, since 'dow' is already in a
compatible format (e.g. 'mon,tue') and we simply append the starttime
(if any)
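As a sketch, the conversion could be as simple as the following; the
field names come from vzdump.cron, the function name is assumed:

    # sketch: build a calendar-event schedule from the legacy cron fields
    sub convert_to_schedule {
        my ($job) = @_;
        my $schedule = $job->{dow}; # already compatible, e.g. 'mon,tue'
        $schedule .= " $job->{starttime}" if defined($job->{starttime});
        return $schedule;
    }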
the id is optional on creation for now (for api compat), but will be
autogenerated (uuid). on update, we simply take the id from before
(the ids of the other entries in vzdump.cron will change, but they
would anyway)
as long as we have the vzdump.cron file, we must lock both
vzdump.cron and jobs.cfg, since we often update both
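A sketch of how such a double lock can be taken with the existing cfs
helpers:

    use PVE::Cluster;

    # sketch: nest the cfs locks so both files get updated consistently
    PVE::Cluster::cfs_lock_file('vzdump.cron', undef, sub {
        PVE::Cluster::cfs_lock_file('jobs.cfg', undef, sub {
            # read, modify and write both configs here
        });
        die $@ if $@;
    });
    die $@ if $@;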
we also change the backupinfo api call to read the jobs.cfg
Dominik Csapak [Mon, 8 Nov 2021 13:07:54 +0000 (14:07 +0100)]
add PVE/Jobs to handle VZDump jobs
this adds SectionConfig handling for jobs (only 'vzdump' for now)
that represents a job that will be handled by pvescheduler, and basic
'job-state' handling for reading/writing state json files
this has some intersections with pvesr's state handling, but does not
use a single state file for all jobs; instead it uses separate ones,
like we do in the backup server.
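A rough sketch of such per-job state files; the directory and naming
scheme here are assumptions:

    use JSON;
    use PVE::Tools;

    # sketch: one small JSON state file per job (path/naming assumed)
    my $state_dir = '/var/lib/pve-manager/jobs';

    sub read_job_state {
        my ($jobid, $type) = @_;
        my $raw = eval {
            PVE::Tools::file_get_contents("$state_dir/$type-$jobid.json");
        };
        return $raw ? decode_json($raw) : { state => 'created' };
    }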
The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer with a frequency of one minute. And we
only introduced it because of the assumption that this approach would
use less memory, AFAIK.
But logging 4+ lines just to state that the timer was started, even
if it does nothing, and that 24/7, is not too cheap and a bit
annoying.
So, in a first step, add a simple daemon which forks off a child for
running jobs once a minute.
This could still be made a bit more intelligent, i.e., look if we
have jobs to run before forking - as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR1 to the daemon if a
job interval changes such that the interval gets narrower).
We try to sync running to minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users
can configure jobs with minute precision, so they probably expect
them to start really close to a minute-change event.
This could be adapted to resync while running, to factor in time
drift. But as long as enough cpu cycles are available, we run in
correct monotonic intervals, so this isn't a must, IMO.
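Put together, the daemon's main loop roughly does the following; a
sketch under the assumptions above, not the actual implementation:

    # sketch: wake up on minute boundaries, then fork off a child per run
    local $SIG{CHLD} = 'IGNORE'; # let the kernel reap finished children

    while (1) {
        sleep(60 - (time() % 60)); # sync to the next full-minute boundary
        my $pid = fork() // die "fork failed: $!\n";
        if (!$pid) {
            run_due_jobs(); # hypothetical: execute all jobs due right now
            exit(0);
        }
    }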
Another improvement could be locking a bit more fine-grained, i.e.,
not on a per-all-local-job-runs basis, but on a per-job (per-guest?)
basis, which would reduce temporary starvation of small high-frequency
jobs by big, less frequent ones.
We argued that it's the user's fault if such situations arise, but
they can evolve over time without anyone noticing, especially in more
complex setups.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Dominik Csapak [Wed, 10 Nov 2021 12:15:56 +0000 (13:15 +0100)]
ui: ceph: make version handling actually future proof
It seems we did not prepare the GUI enough for the API changes
planned for when we stop broadcasting the old single string version.
We have to use the node-specific versions, not the global 'versions'
object.
Also use `PVE.Utils.compare_ceph_versions` everywhere, since some
versions are strings and others are the parts of the version (e.g.
["16", "2", "6"]).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this also removes the "content" selector from the window. as far as
it seems, this selector was never able to select more than one entry,
so it was useless.
the check for FormData() is also removed, because FormData has been
supported by all major browsers for a long time; therefore,
doStandardSubmit() is not necessary anymore either.
ui: storage content: avoid redundant options hasNotesColumn and hideColumns
Replace both with a single showColumns option instead. As the current
use of hasNotesColumn already indicates, when new content-specific
columns are added, it is more natural for each derived class to
specify the columns it wants, rather than those it doesn't.
For hideColumns, there was no user. For hasNotesColumn, the only user
was the backup view.
Set the column information in the storage.BackupView class itself
rather than on the instance (as was done for hasNotesColumn).
vzdump: skip protected backups for dumpdir pruning
Keeps the behavior consistent with what happens for storages. It is
also required to avoid a conflict with the check in archive_remove,
i.e., pruning here marks a backup as 'remove' and then archive_remove
complains that it's protected.
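A sketch of the skip; archive_is_protected() is a hypothetical helper
here and the exact prune call is assumed:

    # sketch: drop protected archives from the prune candidates up
    # front, so they can never be marked as 'remove' here
    my $candidates = [ grep { !archive_is_protected($_->{path}) } @$backups ];
    PVE::Storage::prune_mark_backup_group($candidates, $keep);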