Dominik Csapak [Thu, 2 Dec 2021 10:41:43 +0000 (11:41 +0100)]
vzdump: add new 'next-run' field for vzdump job listing
and calculate it by getting the next event after 'now', since we
currently have no way to get the last run time for jobs that only run
on other cluster nodes.
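A rough sketch of how such a next-run value can be computed from a
job's schedule via PVE::CalendarEvent (variable and field names here
are illustrative assumptions, not the actual patch):

    use PVE::CalendarEvent;

    my $job = { schedule => '21:00' };  # job entry as assumed for this sketch

    # parse the schedule and compute the first event strictly after 'now'
    my $calspec = PVE::CalendarEvent::parse_calendar_event($job->{schedule});
    my $next = PVE::CalendarEvent::compute_next_event($calspec, time());
    $job->{'next-run'} = $next if defined($next);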
ui: node: zfs: use ZFSDetail window from widget-toolkit
The only functional differences I could see are the missing
defaultValue for 'Scan' and the reduced height. For restoring the
height, there is a proposed patch for widget-toolkit.
Dominik Csapak [Tue, 11 Jan 2022 10:26:21 +0000 (11:26 +0100)]
ui: fix novnc scaling radio buttons
when 'off' is selected, the next time the window is opened both radio
buttons are selected and no change is possible anymore. It seems that
'checked: true' only takes effect after the 'init' function has run.
So instead remove 'checked: true' and set the default in the init
function.
some users in the forum[0][1] and Oguz wondered about it and made me
notice it; it's neither used nor referred to in pve-storage,
qemu-server, nor the pve-manager package itself, so just drop it.
Fabian Ebner [Tue, 7 Dec 2021 13:08:44 +0000 (14:08 +0100)]
sorters: use correct property 'direction' and keep default 'ASC'
Ext.util.Sorter does not have an 'order' property, so 'order: DESC'
had no effect. The default is 'ASC', and it is arguably the preferred
direction for all affected sorters anyway.
ui: calendar event: add once daily example and clarify workday one
Relying on the implied 00:00 default is suboptimal, as it makes for a
bit of a magic example that new users may not understand as easily.
So spell it out explicitly, even if a shorter version would be
possible.
We also had some requests for a once-daily (every day) example, and it
is a sensible one to have in general; it could help in understanding
the difference between a list of hours and a single one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
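For illustration, the kind of spec strings this is about (the exact
syntax details are recalled from the systemd-like calendar event
format and should be treated as assumptions):

    00:00            -- every day at midnight (the new once-daily example)
    mon..fri 00:00   -- workdays, with the midnight time spelled out instead of implied
    0,12:00          -- a list of hours (twice a day), as opposed to a single hour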
Dominik Csapak [Wed, 24 Nov 2021 14:47:48 +0000 (15:47 +0100)]
api: journal: stream the journal data to the client
instead of accumulating the whole output of 'mini-journalreader' in
the api call (which can get quite big), use the download mechanism of
the http-server to stream the output to the client.
we lose some error handling possibilities, but we do not have to
allocate anything here, and since perl does not give memory back to
the OS after allocating it[0], this is our desired behaviour.
to keep api compatibility, we need to give the journalreader the '-j'
flag to let it output json.
also tell the http server that the content encoding is gzip and pipe
the output through gzip.
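A minimal sketch of what such a streaming response could look like,
assuming the http-server's download mechanism takes a filehandle plus
content-type/encoding hints and performs the gzip encoding itself
(the exact hash keys are assumptions):

    sub stream_journal {
        # run mini-journalreader with '-j' for JSON output and hand the pipe's
        # filehandle to the http server, which streams the data to the client
        open(my $fh, '-|', 'mini-journalreader', '-j')
            or die "failed to run mini-journalreader - $!\n";

        return {
            download => {
                fh => $fh,
                stream => 1,
                'content-type' => 'application/json',
                'content-encoding' => 'gzip',
            },
        };
    }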
Thomas Lamprecht [Mon, 22 Nov 2021 19:15:29 +0000 (20:15 +0100)]
pvescheduler: make jobs tracking more flexible, rework stop
Avoid hard-coding the replication stack's current implicit
requirement of not being started again until the old worker is done.
We still apply the same check, but changing that to let the jobs have
control is rather easy now.
Also rework the stop logic: send terminate to _all_ workers, make the
timeout an actual shared one (instead of the first worker getting the
whole budget and the remaining ones getting killed right away), and
send a kill to the stuck, leftover ones in one go at the end,
including some logging so that the admin can actually know about
this non-ideal situation.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
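A simplified sketch of the described stop sequence (plain Perl, not
the actual daemon code; %children mapping pid => worker type is an
assumed bookkeeping structure and the timeout value is made up):

    use POSIX ':sys_wait_h';

    my %children;  # pid => worker type, filled by the fork bookkeeping (assumed)

    # first ask _all_ workers to terminate
    kill 'TERM', $_ for keys %children;

    # one shared timeout for everyone, instead of the first worker eating
    # the whole budget and the rest getting killed immediately
    my $deadline = time() + 5;
    while (%children && time() < $deadline) {
        while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
            delete $children{$pid};
        }
        sleep(1) if %children;
    }

    # kill the stuck, leftover ones in one go and log it, so the admin can
    # know about this non-ideal situation
    if (%children) {
        warn "workers still running after timeout, sending KILL: "
            . join(', ', sort keys %children) . "\n";
        kill 'KILL', $_ for keys %children;
    }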
Dominik Csapak [Thu, 18 Nov 2021 13:28:30 +0000 (14:28 +0100)]
pvescheduler: reworking child pid tracking
previously, systemd timers were responsible for running replication
jobs. Those timers would not restart while the previous run was still
going. Trying again while a run is in progress does no real harm, but
it spams the log with errors about not being able to acquire the
correct lock.
To fix this, rework the handling of child processes such that we only
start one per loop iteration if none is currently running. For that,
introduce types for the forks we do and allow one child process per
type (for now, the types are 'jobs' and 'replication').
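Roughly, the per-type tracking could look like the following sketch
(names are assumptions, not the actual pvescheduler code):

    my %forks = (
        jobs        => undef,  # pid of the currently running 'jobs' child, if any
        replication => undef,
    );

    sub fork_once_per_type {
        my ($type, $code) = @_;

        # skip this loop iteration if a child of that type is still alive
        return if defined($forks{$type}) && kill(0, $forks{$type});

        my $pid = fork() // die "fork failed - $!\n";
        if (!$pid) {    # child: run the work and exit
            $code->();
            exit(0);
        }
        $forks{$type} = $pid;    # parent: remember the pid per type
    }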
Dominik Csapak [Thu, 18 Nov 2021 13:28:29 +0000 (14:28 +0100)]
pvescheduler: catch errors in forked children
if '$sub' dies, the error handler of PVE::Daemon triggers, which
initiates a shutdown of the child, resulting in confusing error logs
(e.g. 'got shutdown request, signal running jobs to stop').
Instead, run it under 'eval' and print the error to the syslog.
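In essence something like the following sketch, assuming the PVE
syslog wrapper is used for logging:

    use PVE::SafeSyslog;  # exports syslog()

    # $sub stands in for the worker code reference the child was forked for
    my $sub = sub { die "something went wrong\n" };

    eval { $sub->() };
    if (my $err = $@) {
        syslog('err', "ERROR: $err");
    }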
Dominik Csapak [Wed, 17 Nov 2021 14:21:01 +0000 (15:21 +0100)]
api: backup: normalize 'dow' format when converting
the old web ui sends the days as separate parameters, which get
concatenated with a null byte in the api, causing them to land that
way in jobs.cfg.
To fix this, split+join the list to get a well-formed dow list.
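For example, something along these lines (assuming the generic
list-splitting helper from PVE::Tools handles the \0-separated form):

    use PVE::Tools;

    my $param = { dow => "mon\0tue\0fri" };  # as sent by the old web ui (example)

    # split on the \0 separator and re-join with ',' to get a well-formed list
    $param->{dow} = join(',', PVE::Tools::split_list($param->{dow}))
        if defined($param->{dow});
    # $param->{dow} is now "mon,tue,fri"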
Thomas Lamprecht [Tue, 16 Nov 2021 13:17:42 +0000 (14:17 +0100)]
ui: qemu: disk edit: drop label widths from advanced columns
this is a historical leftover from the time when the bandwidth limits
weren't in their own, separate tab; there we had quite long labels
and synced up the width of the remaining fields to avoid it looking
too much off.
Luckily this is not required anymore, so just drop it for the non-BW
fields.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Tue, 16 Nov 2021 13:08:22 +0000 (14:08 +0100)]
ui: ceph: osd: handle edge case with dead node
If there is a left-over entry for a dead node in the Ceph OSD tree,
the panel wouldn't show and would produce an
Uncaught TypeError: data.versions is undefined
because of the access
node.version = data.versions[node.name];
further below (not visible in the patch itself).
AFAICT, the same issue would also happen when something went wrong
with getting the broadcasted ceph-versions, or when a node is part
of Ceph, but not PVE.
Handle the situation gracefully by always initializing data.versions.
Thomas Lamprecht [Mon, 15 Nov 2021 09:33:05 +0000 (10:33 +0100)]
ui: qemu: disk edit: refactor to more declarative style using bindings
would technically require a versioned dependency bump on widget
toolkit, as the `clearOnDisable` flag is new in 3.4-2, but this is
really only a slight UX improvement, so avoid the hard dependency
bump.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For DB and WAL disks, not only will partitions show up now, but also
one more type of disk that didn't show up before: namely,
GPT-partitioned disks with any partition detected as used.
It's confusing, as the size shown is that of the full disk, with no
indication that a new partition will be appended at the end. This
problem was already present before, but only affected GPT-partitioned
disks where no usage on a partition was detected.
Fabian Ebner [Wed, 6 Oct 2021 09:18:49 +0000 (11:18 +0200)]
partially fix #2285: api: ceph: create osd: allow using partitions
Note that this does not only allow partitions to be used, but, for DB
and WAL disks, also one more type of disk that wasn't allowed before:
namely, GPT-partitioned disks with any partition detected as used.
The reason is get_disks' behavior:
* Without $include_partitions=1, the disk has the same usage as its
  first used partition and thus wasn't allowed. (Except in the case
  that the usage was LVM, where the check was bypassed, but luckily
  OSD creation just failed later because no Ceph volume group would
  be detected.)
* With $include_partitions=1, the disk has usage 'partitions' and is
  thus allowed.
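A hedged sketch of the difference (the get_disks parameter order used
here is an assumption and may not match the real signature):

    use PVE::Diskmanage;

    # without $include_partitions, a partially used GPT disk inherits the
    # usage of its first used partition, so the OSD creation check rejected it
    my $disks = PVE::Diskmanage::get_disks();

    # with $include_partitions set, partitions are listed as well and such a
    # disk reports usage 'partitions', which the creation code can now accept
    my $all = PVE::Diskmanage::get_disks(undef, 0, 1);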