Christian Ebner [Mon, 27 May 2024 12:07:36 +0000 (14:07 +0200)]
pxar: bin: ignore version and prelude entries in listing
Do not list the pxar format version and the prelude entries in the output of pxar list, as these are not regular entries. Do include them, however, when dumping with the debug environment variable set.
Since the prelude can be arbitrary in size, only show its content size.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Thu, 2 May 2024 09:26:57 +0000 (11:26 +0200)]
client: pxar: allow to restore prelude to optional path
Since pxar format version 2, archives can store additional information in a prelude entry.
Add an optional parameter to `pxar` and `proxmox-backup-client` to specify the path to restore the prelude to, and pass this to the archive extraction by extending `PxarExtractOptions` with a corresponding field. If none is given, the prelude is simply skipped during restore.
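A minimal sketch of how such an option could be threaded through to extraction; `PxarExtractOptions`, the `prelude_path` field and the `handle_prelude` helper are illustrative assumptions, not the actual interface:
```rust
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

/// Subset of the extraction options relevant here (sketch only).
struct PxarExtractOptions {
    /// If set, the prelude payload is written to this path instead of being skipped.
    prelude_path: Option<PathBuf>,
}

/// Called when the decoder yields a prelude entry; `data` is its raw content.
fn handle_prelude(opts: &PxarExtractOptions, data: &[u8]) -> std::io::Result<()> {
    match &opts.prelude_path {
        Some(path) => {
            // Restore the prelude to the requested target file.
            let mut file = File::create(path)?;
            file.write_all(data)
        }
        // No target given: silently skip the prelude during restore.
        None => Ok(()),
    }
}

fn main() -> std::io::Result<()> {
    // Hypothetical target path, purely for illustration.
    let opts = PxarExtractOptions { prelude_path: Some(PathBuf::from("prelude.out")) };
    handle_prelude(&opts, b"example prelude content")
}
```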
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Fri, 22 Mar 2024 14:27:59 +0000 (15:27 +0100)]
client: pxar: opt encode cli exclude patterns as Prelude
Instead of encoding the pxar cli exclude patterns as a regular file within the root directory of an archive, store this information directly after the pxar format version entry, in the entry of kind Prelude.
This behavior is, however, currently exclusive to archives written with format version 2 in the split metadata and payload case.
This is a breaking change for the encoding of new cli exclude
parameters. Any new exclude parameter will not be added to an already
present .pxar-cliexclude file, and it will not be created if not
present.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Thu, 4 Apr 2024 10:49:42 +0000 (12:49 +0200)]
client: pxar: add helper to handle optional preludes
Pxar archives with format version 2 allow storing optional file format version and prelude entries.
Cover the case for these entries: the file format version entry was introduced to distinguish between the different file formats used for encoding, while the prelude entry is used to store optional metadata such as the pxar cli exclude parameters.
Add the logic to accept and decode these prelude entries when
accessing the archive via a decoder instance.
For now simply ignore them.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Mon, 3 Jun 2024 08:24:48 +0000 (10:24 +0200)]
client: backup writer: make backup info output more concise
With the additional output in case of split pxar archives, the upload
statistics logged by the backup writer following a backup are crowded
and hard to read.
Make the output more concise by merging the currently two lines per
upload stream, shown as e.g.:
```
data.ppxar: had to backup 4 MiB of 10.943 GiB (compressed 159 B) in 49.30s
data.ppxar: average backup speed: 83.09 KiB/s
```
into a single line, shown as e.g.:
```
data.ppxar: had to back up 4 MiB of 10.943 GiB (159 B compressed) in 49.30 s (average 83.09 KiB/s)
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Fri, 24 May 2024 16:31:39 +0000 (18:31 +0200)]
fix #3174: client: pxar: enable caching and meta comparison
When walking the file system tree, check for each entry if it is
reusable, meaning that the metadata did not change and the payload
chunks can be reindexed instead of reencoding the whole data.
If the metadata matches, the range of the dynamic index entries for that file is looked up in the previous payload data index.
Use the range and the possible padding introduced by partial reuse of chunks to decide whether to reuse the dynamic entries and encode the file payloads as payload references right away, or to cache the entry for now and keep looking ahead.
If, however, a non-reusable (because changed) entry is encountered before the padding threshold is reached, the cached entries are flushed to the archive by re-encoding them, resetting the cached state.
Reusable chunk digests and sizes, as well as reference offsets to the start of regular files' payloads within the payload stream, are injected into the backup stream by sending them to the chunker via a dedicated channel, forcing a chunk boundary and inserting the chunks.
If the threshold value for reuse is reached, the chunks are injected into the payload stream and the references with the corresponding offsets are encoded in the metadata stream.
Since multiple files might be contained within a single chunk, chunk deduplication is ensured by keeping back the last chunk, so following files may reuse that same chunk without indexing it twice. This chunk is also injected into the stream in case the following lookups lead to a cache clear and re-encoding.
Directory boundaries are cached as well, and written as part of the
encoding when flushing.
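A condensed sketch of the decision flow described above; the type names, the 10% padding ratio and the helper functions are purely illustrative and not the actual implementation:
```rust
/// What was determined for a file compared to the previous snapshot (sketch).
enum FileState {
    /// Metadata unchanged; a reusable chunk range was found with this payload
    /// size and this much padding (bytes in the chunks not belonging to the file).
    Reusable { size: u64, padding: u64 },
    /// Metadata changed, the payload must be re-encoded.
    Changed,
}

#[derive(Default)]
struct Lookahead {
    cached: Vec<String>, // paths whose reuse/re-encode decision is still pending
    size: u64,           // accumulated reusable payload size
    padding: u64,        // accumulated padding of the candidate chunk ranges
}

impl Lookahead {
    fn handle(&mut self, path: &str, state: FileState) {
        match state {
            FileState::Reusable { size, padding } => {
                self.cached.push(path.to_string());
                self.size += size;
                self.padding += padding;
                // Illustrative 10% threshold: once the padding is small
                // relative to the reusable payload, commit to reuse.
                if self.padding * 10 <= self.size {
                    self.flush(true);
                }
                // Otherwise keep looking ahead; a following file sharing the
                // same chunks may improve the ratio.
            }
            // A changed entry before the threshold was reached: re-encode
            // everything cached so far and reset the state.
            FileState::Changed => self.flush(false),
        }
    }

    fn flush(&mut self, reuse: bool) {
        for entry in self.cached.drain(..) {
            if reuse {
                println!("{entry}: inject reused chunks, encode payload reference");
            } else {
                println!("{entry}: re-encode payload");
            }
        }
        self.size = 0;
        self.padding = 0;
    }
}

fn main() {
    let mut lookahead = Lookahead::default();
    lookahead.handle("etc/hosts", FileState::Reusable { size: 4096, padding: 131072 });
    lookahead.handle("etc/passwd", FileState::Changed);
}
```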
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 23 Apr 2024 13:16:26 +0000 (15:16 +0200)]
pxar: caching: add look-ahead cache
Add a look-ahead cache and the necessary types to store the required data and keep track of directory boundaries while traversing the filesystem tree, in order to postpone the decision whether to reuse or re-encode a given regular file with unchanged metadata.
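A rough idea of the kind of types such a cache needs; the names and fields below are made up for illustration and do not reflect the actual code:
```rust
use std::ffi::CString;

/// Entries kept back by the look-ahead cache until a reuse/re-encode decision
/// can be made (illustrative only).
enum CacheEntry {
    /// A regular file with unchanged metadata and its candidate payload range.
    RegularFile { name: CString, payload_size: u64 },
    /// A directory boundary, so a flush can replay the enter/leave operations
    /// in the correct order when re-encoding.
    DirectoryStart { name: CString },
    DirectoryEnd,
}

/// The cache itself: pending entries plus bookkeeping for the decision.
struct LookaheadCache {
    entries: Vec<CacheEntry>,
    /// Nesting depth of directories that are cached but not yet flushed.
    open_dirs: usize,
}
```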
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Wed, 21 Feb 2024 13:50:40 +0000 (14:50 +0100)]
client: pxar: add method for metadata comparison
Add a method to compare the metadata of the current file entry against the metadata of the entry looked up in the previous backup snapshot. If the metadata matches, the start offset pointing to the file's payload header in the payload stream is returned.
This is in preparation for reusing payload chunks for unchanged files.
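Roughly, the comparison could look like the following sketch, with stand-in metadata fields instead of the real pxar metadata type:
```rust
/// Minimal stand-ins for the metadata fields relevant for reuse; the real code
/// compares the full pxar metadata (mode, ownership, times, xattrs, ...).
#[derive(PartialEq)]
struct EntryMetadata {
    mode: u64,
    uid: u32,
    gid: u32,
    mtime_secs: i64,
    mtime_nanos: u32,
    size: u64,
}

/// Compare the current file's metadata against the entry from the previous
/// snapshot. On a match, return the offset of the file's payload header in the
/// previous payload stream so its chunks can be reused.
fn compare_metadata(
    current: &EntryMetadata,
    previous: &EntryMetadata,
    previous_payload_offset: u64,
) -> Option<u64> {
    if current == previous {
        Some(previous_payload_offset)
    } else {
        None
    }
}
```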
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Wed, 21 Feb 2024 12:06:46 +0000 (13:06 +0100)]
client: implement prepare reference method
Implement a method that prepares the decoder instance to access a previous snapshot's metadata index and payload index in order to pass them to the pxar archiver. The archiver can then utilize these to compare the file metadata against the previous state and gather reusable chunks.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Wed, 21 Feb 2024 10:58:14 +0000 (11:58 +0100)]
client: streams: add channels for dynamic entry injection
Add non-blocking channels between the pxar archiver and the chunk stream, as well as between the chunk stream and the backup writer, in order to reuse dynamic entries of a previous backup run and index them for the new snapshot.
The archiver sends forced boundary positions and the dynamic
entries to inject into the chunk stream following this boundary.
The chunk stream, as receiver, consumes this channel's input whenever a new chunk is requested by the upload stream, forcing a non-regular chunk boundary in the pxar stream at the requested positions.
The dynamic entries to inject and the boundary are then sent via the second asynchronous channel to the backup writer's upload stream, which indexes them by inserting the dynamic entries as known chunks.
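The wiring can be pictured with plain `std::sync::mpsc` channels as a stand-in for the actual async channels; the `InjectChunks` message and all names are assumptions for illustration:
```rust
use std::sync::mpsc;

/// Chunks to inject at a forced boundary in the payload stream (sketch).
struct InjectChunks {
    /// Byte offset in the pxar payload stream where the boundary is forced.
    boundary: u64,
    /// (size, digest) pairs of the reused dynamic index entries.
    chunks: Vec<(u64, [u8; 32])>,
}

fn main() {
    // Archiver -> chunk stream: forced boundaries plus the entries to inject.
    let (inject_tx, inject_rx) = mpsc::channel::<InjectChunks>();
    // Chunk stream -> backup writer: injected entries to index as known chunks.
    let (index_tx, index_rx) = mpsc::channel::<InjectChunks>();

    // Archiver side: request a boundary and hand over the reused entries.
    inject_tx
        .send(InjectChunks { boundary: 4096, chunks: vec![(1024, [0u8; 32])] })
        .unwrap();
    drop(inject_tx);

    // Chunk stream side: whenever a new chunk is requested, drain pending
    // injections without blocking and forward them to the upload stream.
    while let Ok(inject) = inject_rx.try_recv() {
        index_tx.send(inject).unwrap();
    }
    drop(index_tx);

    // Backup writer side: register the entries as known chunks in the index.
    for inject in index_rx {
        println!("indexing {} reused chunks at offset {}", inject.chunks.len(), inject.boundary);
    }
}
```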
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Mon, 13 May 2024 12:37:23 +0000 (14:37 +0200)]
chunker: add method to reset chunker state
When forcing a boundary, the internal chunker state is not in sync
with the chunk stream anymore. The reset method therefore allows resetting the internal state.
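A reduced sketch of what such a reset amounts to; the field names are illustrative only:
```rust
/// Reduced sketch of a content-defined chunker's internal state.
struct Chunker {
    /// Bytes consumed since the last (natural) chunk boundary.
    chunk_size: usize,
    /// Fill level of the rolling-hash window.
    window_fill: usize,
}

impl Chunker {
    /// After a boundary was forced externally (for injected chunks), the
    /// internal state no longer matches the stream position, so start over.
    fn reset(&mut self) {
        self.chunk_size = 0;
        self.window_fill = 0;
    }
}
```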
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 19 Mar 2024 09:23:08 +0000 (10:23 +0100)]
client: chunk stream: add struct to hold injection state
Add a dedicated structure to hold the optional sender and receiver instances and the state for injecting reused dynamic entries into the payload stream of split stream pxar archives.
The asynchronous channels must only be attached to the payload archive, leaving the current behavior in place for the metadata archive and for the current default encoding, which does not reuse payload chunks of previous snapshots.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Thu, 21 Sep 2023 12:46:56 +0000 (14:46 +0200)]
upload stream: implement reused chunk injector
In order to be included in the backup's index file, reused payload
chunks have to be injected into the payload upload stream at a
forced boundary. The chunker forces a chunk boundary and sends the
list of reusable dynamic entries to be uploaded.
This implements the logic to receive these dynamic entries via the
corresponding communication channel from the chunker and inject the
entries into the backup upload stream by looking for the matching
chunk boundary, already forced by the chunker.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Thu, 14 Mar 2024 14:07:03 +0000 (15:07 +0100)]
client: pxar: helper for lookup of reusable dynamic entries
The helper method allows looking up the entries of a dynamic index which fully cover a given offset range. Further, the helper returns the start padding from the start offset of the first covering dynamic index entry to the start offset of the given range, as well as the end padding.
This will be used to look up size and digest for chunks covering the payload range of a regular file, in order to reuse found chunks by indexing them in the archive's index file instead of re-encoding the payload.
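The lookup could be sketched as follows, assuming a dynamic index reduced to cumulative end offsets plus digests; all names are illustrative and the real helper differs:
```rust
/// One dynamic index entry: cumulative end offset plus chunk digest (sketch).
struct DynamicEntry {
    end: u64,
    digest: [u8; 32],
}

/// The entries fully covering a requested range, plus the padding between the
/// covering chunks' boundaries and the range itself.
struct ReusableRange {
    entries: Vec<DynamicEntry>,
    start_padding: u64,
    end_padding: u64,
}

fn lookup_range(index: &[DynamicEntry], start: u64, end: u64) -> Option<ReusableRange> {
    let mut entries = Vec::new();
    let mut chunk_start = 0u64;
    let mut start_padding = 0u64;

    for entry in index {
        let chunk_end = entry.end;
        if chunk_end <= start {
            // Chunk lies completely before the requested range.
            chunk_start = chunk_end;
            continue;
        }
        if entries.is_empty() {
            // First covering chunk: bytes before `start` count as start padding.
            start_padding = start - chunk_start;
        }
        entries.push(DynamicEntry { end: entry.end, digest: entry.digest });
        if chunk_end >= end {
            // Last covering chunk: bytes after `end` count as end padding.
            return Some(ReusableRange { entries, start_padding, end_padding: chunk_end - end });
        }
        chunk_start = chunk_end;
    }
    None // the range is not fully covered by this index
}

fn main() {
    let index = vec![
        DynamicEntry { end: 4096, digest: [0u8; 32] },
        DynamicEntry { end: 10240, digest: [1u8; 32] },
    ];
    if let Some(range) = lookup_range(&index, 5000, 9000) {
        println!(
            "{} chunk(s), start padding {}, end padding {}",
            range.entries.len(),
            range.start_padding,
            range.end_padding
        );
    }
}
```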
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Fri, 15 Mar 2024 08:46:06 +0000 (09:46 +0100)]
client: pxar: include payload offset in entry listing
Also display the payload offset as listing output when the regular file
entry had a payload reference rather than the payload encoded in the
archive. This allows for debugging by inspecting the raw payload data
file at the given offset.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Mon, 27 May 2024 11:57:44 +0000 (13:57 +0200)]
pxar: bin: cover listing for split archives
Allows listing entries of split pxar archives. As the decoder skips
over the file payloads, the corresponding payload file has to be
provided. Otherwise the decoder would skip inside the metadata
archive, leading to incorrect decoding.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 23 Apr 2024 17:25:55 +0000 (19:25 +0200)]
file restore: cover split metadata and payload archives
Attach the payload data archive as input stream to the decoder
and accessor instances for split archives.
Allows restoring contents from split archives via the
`proxmox-file-restore extract` command, by passing the metadata
archive name.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Wed, 21 Feb 2024 19:39:48 +0000 (20:39 +0100)]
catalog: shell: make split pxar archives accessible
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives,
using the corresponding helper methods.
Allows restoring split metadata and payload stream pxar archives via
the catalog shell.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Thu, 15 Feb 2024 11:47:32 +0000 (12:47 +0100)]
client: mount: make split pxar archives mountable
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Wed, 21 Feb 2024 10:51:52 +0000 (11:51 +0100)]
client: restore: read payload from dedicated index
Whenever a split pxar archive is encountered, instantiate and attach
the required dedicated reader instance to the decoder instance on
restore.
Piping the output to stdout is not possible for these, as this would
require a decoder instance which can decode the input stream, while
maintaining the pxar stream format as output.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 21 May 2024 15:18:38 +0000 (17:18 +0200)]
client: tools: helper to check pxar filename extensions
With the introduction of split pxar archives, the allowed extensions
are now `.pxar`, `.mpxar` and `.ppxar`. Add a helper function to
check for all valid variants, including the optional additional `.didx` extension in the case of a server archive name.
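A possible shape for such a check, with the function name and signature being assumptions rather than the actual helper:
```rust
/// Check whether `name` uses one of the supported pxar extensions, optionally
/// followed by `.didx` for server-side archive names (sketch).
fn has_pxar_filename_extension(name: &str, allow_didx: bool) -> bool {
    let name = if allow_didx {
        name.strip_suffix(".didx").unwrap_or(name)
    } else {
        name
    };
    name.ends_with(".pxar") || name.ends_with(".mpxar") || name.ends_with(".ppxar")
}

fn main() {
    assert!(has_pxar_filename_extension("root.pxar", false));
    assert!(has_pxar_filename_extension("root.mpxar.didx", true));
    assert!(!has_pxar_filename_extension("root.img.fidx", true));
}
```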
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 19 Mar 2024 08:43:22 +0000 (09:43 +0100)]
client: helper: add method for split archive name mapping
Add a helper method that takes an archive name as input and checks if the given archive is present in the manifest, also taking possible split archive extensions into account. It returns the pxar archive name if found, or the split archive names if the split archive variant is present in the manifest. If neither matches, an error is returned signaling that nothing in the manifest matched.
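As a sketch, with the manifest reduced to a plain list of archive names and all identifiers being illustrative assumptions:
```rust
/// Result of mapping a requested archive name against the manifest (sketch).
enum ArchiveMapping {
    /// A plain `.pxar` archive is present under the given name.
    Single(String),
    /// The split variant is present: metadata and payload archive names.
    Split { metadata: String, payload: String },
}

/// `manifest` here is just the list of archive names in the snapshot manifest.
fn map_archive_name(name: &str, manifest: &[String]) -> Result<ArchiveMapping, String> {
    let contains = |n: &str| manifest.iter().any(|m| m == n);

    if contains(name) {
        return Ok(ArchiveMapping::Single(name.to_string()));
    }
    if let Some(base) = name.strip_suffix(".pxar") {
        let metadata = format!("{base}.mpxar");
        let payload = format!("{base}.ppxar");
        if contains(&metadata) && contains(&payload) {
            return Ok(ArchiveMapping::Split { metadata, payload });
        }
    }
    Err(format!("archive '{name}' not found in manifest"))
}

fn main() {
    let manifest = vec!["root.mpxar".to_string(), "root.ppxar".to_string()];
    match map_archive_name("root.pxar", &manifest) {
        Ok(ArchiveMapping::Single(name)) => println!("plain archive {name}"),
        Ok(ArchiveMapping::Split { metadata, payload }) => println!("split: {metadata} + {payload}"),
        Err(err) => println!("{err}"),
    }
}
```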
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Mon, 22 Apr 2024 15:39:00 +0000 (17:39 +0200)]
client: pxar: optionally split metadata and payload streams
... and attach the split payload writer variant to the pxar archive
creation. By this, metadata and payload data will create separate dynamic indexes, allowing lookup and reuse of payload chunks without the additional overhead of the pxar archive's metadata.
For now this functionality remains disabled and will be enabled in a
later patch once the logic for reusing the payload chunks is in
place.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Mon, 22 Apr 2024 14:23:40 +0000 (16:23 +0200)]
client: pxar: combine writers into struct
Introduce a `PxarWriters` struct to bundle all writer instances
required for the pxar archive creation into a single object to limit
the number of function call parameters.
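Conceptually something like the following simplified sketch (the real writers are not plain `Write` trait objects):
```rust
use std::io::Write;

/// Bundle the writers needed for archive creation into one parameter.
struct PxarWriters<W: Write> {
    /// Metadata archive (or the full archive in the non-split case).
    archive: W,
    /// Optional separate payload data stream for split archives.
    payload: Option<W>,
}

fn create_archive<W: Write>(writers: PxarWriters<W>) {
    // ...one argument instead of several writer parameters...
    let _ = writers;
}

fn main() {
    let writers = PxarWriters { archive: Vec::new(), payload: Some(Vec::new()) };
    create_archive(writers);
}
```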
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 20 Feb 2024 16:07:08 +0000 (17:07 +0100)]
client: pxar: switch to stack based encoder state
... and adapt to the new reader/writer variant for encoder or
decoder/accessor to attach a dedicated payload input/output for split
pxar archives.
This is in preparation for look-ahead caching, where passing around per-directory-level encoder instances with internal references is not feasible.
Previously, a new encoder instance was generated for each directory level, restricting possible implementation errors. These encoder instances were internally linked by references to keep track of the state changes in a parent-child relationship.
This is, however, not feasible when the encoder has to be passed by mutable reference, as required by the look-ahead cache implementation. The encoder has therefore been adapted to a single-instance implementation with an internal stack keeping track of the state.
Depends on the bumped pxar library version, including the patches to
attach the corresponding variant for the pxar reader/writer
instantiation.
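The state handling can be pictured with this simplified sketch; the state fields and method names are invented for illustration:
```rust
/// Per-directory bookkeeping the encoder needs; fields are illustrative only.
struct EncoderState {
    /// Archive offset at which this directory level started.
    start_offset: u64,
    /// Number of entries collected for this directory's goodbye table.
    entry_count: usize,
}

/// A single encoder instance with an internal state stack, instead of one
/// nested, reference-linked encoder instance per directory level.
struct Encoder {
    state_stack: Vec<EncoderState>,
}

impl Encoder {
    fn enter_directory(&mut self, start_offset: u64) {
        self.state_stack.push(EncoderState { start_offset, entry_count: 0 });
    }

    fn leave_directory(&mut self) {
        // Finish the current level and continue with the parent's state; the
        // encoder itself can now be passed around as a single `&mut` value,
        // as needed by the look-ahead cache.
        let finished = self.state_stack.pop().expect("unbalanced directory nesting");
        println!("closed directory level starting at offset {}", finished.start_offset);
    }
}

fn main() {
    let mut encoder = Encoder { state_stack: vec![EncoderState { start_offset: 0, entry_count: 0 }] };
    encoder.enter_directory(1024);
    encoder.leave_directory();
}
```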
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 28 May 2024 09:42:33 +0000 (11:42 +0200)]
datastore: dynamic index: add method to get digest
In preparation for injecting reused payload chunks into payload streams for regular files with unchanged metadata. This allows getting the digest of a dynamic index entry in order to construct a reusable dynamic entry from it.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 28 May 2024 09:42:11 +0000 (11:42 +0200)]
api: datastore: refactor getting local chunk reader
Move the code to get the local chunk reader to a dedicated function
to make it reusable. The same code is required to get the local chunk
reader for the payload stream for split stream archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Christian Ebner [Tue, 28 May 2024 09:42:10 +0000 (11:42 +0200)]
client: backup: factor out extension from backup target
Instead of composing the backup target name and pushing it to the backup list, push the archive name and extension separately, only constructing the full name while iterating the list later.
This keeps it possible to additionally prefix the extension, as required for the separate pxar metadata and payload indexes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Shannon Sterz [Thu, 23 May 2024 11:25:59 +0000 (13:25 +0200)]
auth: add locking to `PbsAuthenticator` to avoid race conditions
currently we don't lock the shadow file when removing or storing a
password. by adding locking here we avoid a situation where storing
and/or removing a password concurrently could lead to a race
condition. in this scenario it is possible that a password isn't
persisted or a password isn't removed. we already do this for
the "token.shadow" file, so just use the same mechanism here.
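as a rough illustration of the idea (not the actual `PbsAuthenticator` code, which uses the project's own locking helpers), an exclusive advisory lock around the shadow file update could look like this, assuming the `libc` crate and placeholder file paths:
```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::io::AsRawFd;

fn update_shadow(entry: &str) -> std::io::Result<()> {
    // Placeholder lock file path; the real code locks PBS's own shadow file.
    let lock = OpenOptions::new().create(true).write(true).open("shadow.lck")?;
    // Block until we hold the exclusive lock; it is released when `lock` drops.
    if unsafe { libc::flock(lock.as_raw_fd(), libc::LOCK_EX) } != 0 {
        return Err(std::io::Error::last_os_error());
    }

    // The read-modify-write of the shadow file happens only while locked, so
    // concurrent store/remove operations can no longer lose each other's update.
    let mut shadow = OpenOptions::new().create(true).append(true).open("shadow")?;
    shadow.write_all(entry.as_bytes())?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    update_shadow("user@pbs:placeholder-hash\n")
}
```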
Thomas Lamprecht [Wed, 22 May 2024 17:03:05 +0000 (19:03 +0200)]
tape: rework setting MAM Host type attributes
The product name is Proxmox Backup Server, not just Backup Server,
that makes no sense on its own and it really cannot be expected by
tools extracting any Medium Auxiliary Memory (MAM) info to render it
as `${app_vendor} ${app_name}`.
Drop the comment about ignoring errors, that's pretty clear with
the only-log-error construct.
Instead, add some comments about what the hex numbers refer to and what their respective length (limit) is. The names were taken from Table 315 "MAM Host type attributes" in the "IBM LTO SCSI Reference" for LTO 9.
Slightly off-topic: The tape code really is a mess with those hex numbers sprinkled hard-coded all over the place, often with some unchecked coupling in other places (like here, where the list of set MAM attrs and the one that gets cleared can easily get out of sync..), but that's for another time to clean up (I need to cut a release).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
docs: document notification-mode and merge old notification section
This new section describes how the notification-mode parameter works.
The section also contains parts of the old notification section from the maintenance chapter, reusing the description of the `notify` and `notify-user` parameters.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Gabriel Goller [Wed, 15 May 2024 09:58:45 +0000 (11:58 +0200)]
notifications: fix legacy sync notifications
When using the legacy notifications, the sync mode would pick up the settings from the prune-job, which default to Error. This completely disables notifications for successful sync jobs when using the legacy system.
Reported in the forum: https://forum.proxmox.com/threads/147018/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Stefan Sterz [Wed, 6 Mar 2024 12:36:09 +0000 (13:36 +0100)]
auth: use auth-api when generating keys and generate ec keys
this commit switches pbs over to generating ed25519 keys when
generating new auth api keys. this also removes the last direct
usages of openssl here and further unifies key handling in the auth
api.
Stefan Sterz [Wed, 6 Mar 2024 12:36:08 +0000 (13:36 +0100)]
auth: move to auth-api's private and public keys when loading keys
this commit moves away from using openssl's `PKey` and uses the
wrappers from proxmox-auth-api. this allows us to handle keys in a
more flexible way and enables us to move to ec based crypto for the authkey in the future.
Stefan Sterz [Wed, 6 Mar 2024 12:36:07 +0000 (13:36 +0100)]
auth: upgrade hashes on user log in
if a user's password is not hashed with the latest password hashing function, re-hash the password with the newest hashing function. we can only do this on login and after the password has been validated, as this is the only point at which we have access to the plain-text password and also know that it matches the original password.
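a rough sketch of the idea, with stand-ins for the real hashing and verification functions:
```rust
fn is_current_hash(stored: &str) -> bool {
    // Placeholder check: treat only one prefix as the up-to-date hash format.
    stored.starts_with("$y$")
}

fn verify(_plain: &str, _stored: &str) -> bool {
    true // stand-in for the real password verification
}

fn rehash(_plain: &str) -> String {
    "$y$new-hash".to_string() // stand-in for hashing with the newest function
}

/// Login is the only point where the plain-text password is available and
/// known to match, so the stored hash can be transparently upgraded here.
fn authenticate(username: &str, plain: &str, stored: &mut String) -> Result<(), String> {
    if !verify(plain, stored) {
        return Err(format!("authentication failed for {username}"));
    }
    if !is_current_hash(stored) {
        *stored = rehash(plain); // persist the upgraded hash for next time
    }
    Ok(())
}

fn main() {
    let mut stored = "$5$old-hash".to_string();
    authenticate("root@pam", "secret", &mut stored).unwrap();
    assert!(stored.starts_with("$y$"));
}
```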
Stefan Sterz [Wed, 6 Mar 2024 12:36:06 +0000 (13:36 +0100)]
auth: move to hmac keys for csrf tokens
previously we used a self-rolled implementation for csrf tokens. while it's unlikely to cause issues in reality, as csrf tokens are only valid for a given ticket's lifetime, there are still theoretical attacks on our implementation. so move all of this code into the proxmox-auth-api crate and use hmac instead.
this change should not impact existing installations for now, as this falls back to the old implementation if a key is already present. hmac keys will only be used for new installations and if users manually remove the old key and let a new one be generated.
Thomas Lamprecht [Tue, 21 May 2024 09:30:07 +0000 (11:30 +0200)]
ui: garbage-collection: use different state-id for global and per-datastore view
For one, these different views have different columns shown, and more
importantly: with the state being shared one could change sorting in
the global view and then have that applied in the per-datastore view
too, even if one cannot sort that view explicitly otherwise as there's
just one row anyway. This small glitch might lead to a bit of confusion in the worst case and looks unpolished in any case.
Note that I explicitly decided against encoding the datastore in the
state-id for the per-datastore views for now, as most users will want
to adapt layout (like column width) for all per-datastores views.
Having to re-do that for every datastore separately can be quite a nuisance, while the same user wanting a different layout for each datastore in their per-datastore view seems rather to be an edge case.
And we can always change this, so starting out with the slightly more
restricted design that has less browser local data to be saved seems
better w.r.t. maintainability.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Gabriel Goller [Thu, 16 May 2024 09:18:41 +0000 (11:18 +0200)]
fix #5422: ui: garbage-collection: make columns in global view sortable
Make columns sortable in the global 'Prune & GC Jobs' view. In the
per-datastore view the columns will not be sortable as there can only be
one job.
Fixes: db3fd213 ("fix #3217: ui: global prune and gc job view")
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Dominik Csapak [Wed, 15 May 2024 09:55:13 +0000 (11:55 +0200)]
restore daemon: search disk also with truncated serial
the disk serial given to virtio disks can only be 20 characters, so
looking for a disk with a longer serial will always fail (like
'drive-tpmstate0-backup'). If the serial is longer, also try with the
truncated one. Leave the first try in place in case the limit changes.
Dominik Csapak [Wed, 15 May 2024 09:55:12 +0000 (11:55 +0200)]
restore daemon: log some errors for dir traversal
in case we cannot stat a file in the restore vm, log the path and reason
why. This should normally not happen, but when it does, the path and
error might help us find the issue.
Dominik Csapak [Mon, 13 May 2024 10:49:26 +0000 (12:49 +0200)]
tape: include drive activity in status
Since we don't query each drive's status separately, but rely on a single call to the drive listing for that, we now add the option to query the activity there too. This makes that data available for us to show in a separate (by default hidden) column.
We also show the activity in the 'State' column when the drive is idle from our perspective. This is useful when e.g. an LTO-9 tape is loaded for the first time and is calibrating, since that happens automatically.
Dominik Csapak [Mon, 13 May 2024 10:49:25 +0000 (12:49 +0200)]
tape: drive status: make some depend on the activity
when the tape drive has an activity (and the tape is in motion), certain
calls block until the operation is finished. Since we cannot predict how
long it's going to take and it can be quite long in certain cases,
skip those calls when the drive is doing anything.
If we cannot determine the activity, try to do the queries.
We have to extend the check for a loaded drive in the UI, since the
position is not available during any activity.
Dominik Csapak [Tue, 7 May 2024 13:45:52 +0000 (15:45 +0200)]
tape: improve throughput by not unnecessarily syncing/committing
When writing data to tape, the idea was to sync/commit to tape and write the catalog to disk every 128 GiB of data. For that, the counter 'bytes_written' was introduced and checked after every chunk/snapshot archive.
Sadly we forgot to reset the counter after doing so, which meant that
after 128GiB was written onto the tape, we synced/committed after every
archive on the tape for the remaining length of the tape.
Since syncing to tape and writing to disk takes a bit of time, the drive
had to slow down every time and reduced the available throughput. (In
our tests here from ~300MB/s to ~255MB/s).
By resetting the value to zero after syncing, we avoid that and increase
throughput performance when backups are bigger than 128GiB on tape.
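The counter logic, including the fix, can be sketched roughly as follows (names and structure are illustrative, not the actual tape writer code):
```rust
const COMMIT_THRESHOLD: u64 = 128 * 1024 * 1024 * 1024; // 128 GiB

struct TapeWriter {
    bytes_written: u64,
}

impl TapeWriter {
    fn write_archive(&mut self, archive_len: u64) {
        // ... the chunk/snapshot archive itself is written to tape here ...
        self.bytes_written += archive_len;

        if self.bytes_written >= COMMIT_THRESHOLD {
            self.sync_and_commit();
            // The fix: reset the counter, otherwise every following archive
            // would trigger another sync/commit for the rest of the tape.
            self.bytes_written = 0;
        }
    }

    fn sync_and_commit(&mut self) {
        // stand-in for syncing the drive buffer and committing the catalog
    }
}

fn main() {
    let mut writer = TapeWriter { bytes_written: 0 };
    for _ in 0..4 {
        writer.write_archive(40 * 1024 * 1024 * 1024); // 40 GiB archives
    }
}
```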
Gabriel Goller [Fri, 26 Apr 2024 14:02:43 +0000 (16:02 +0200)]
api-types: remove influxdb bucket name restrictions
Remove the regex for influxdb organizations and buckets. Influxdb does
not place any constraints on these names and allows all characters. This
allows influxdb organization names with slashes.
Also remove a duplicate comment and add some missing ones.
This also aligns the behavior with PVE, as there are no restrictions there either.
The motivation for this patch is this forum post:
https://forum.proxmox.com/threads/influx-db-organization-doesnt-allow-slash.145402/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
ui: sync job: fix error if local namespace is selected first
When creating a new sync job and a local namespace is configured
without setting a remote first, the createMaxPrefixLength
was passed an array instead of a string/undefined/null, which
triggered a 'ns2.match is not a function' exception, making the UI
glitchy afterwards.
Fixed by explicitly checking for a string. Verified that the other
user of NamespaceMaxDepthReduced, the prune job edit window, does not
break after the change.
Stefan Sterz [Fri, 5 Apr 2024 14:12:57 +0000 (16:12 +0200)]
fix: tape ui: unset `deleteEmpty` in `TapeBackupWindow`
since the api rejects unknown parameters, deleteEmpty needs to be
unset here, because the endpoint for creating backups does not support
deleting parameters. otherwise a user will get a fairly cryptic error
message in the gui.
Stefan Lendl [Thu, 4 Apr 2024 10:00:35 +0000 (12:00 +0200)]
api: create and update vlan interfaces
* Implement setting vlan-id and vlan-raw-device in the create and update api.
* Check if the provided vlan-raw-device exists.
* Move VLAN_INTERFACE_REGEX to the top-level network module to use it in the checking functions there. Change it to match with named capture groups.
* Add unit tests to verify parsing vlan_id and vlan_raw_device from the name.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* Add lexer Token enum variants for vlan-id and vlan-raw-device and parse
them in parse_iface_attributes.
* Add tests to verify this works in the above scenarios
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stefan Lendl [Thu, 4 Apr 2024 10:00:31 +0000 (12:00 +0200)]
config: write vlan network interface
* Add vlan_id and vlan_raw_device fields to the Interface api type
* Write the vlan-specific properties for the vlan interface type to the network config
* Add several tests to verify the functionality
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com> Tested-by: Lukas Wagner <l.wagner@proxmox.com> Reviewed-by: Lukas Wagner <l.wagner@proxmox.com> Tested-by: Folke Gleumes <f.gleumes@proxmox.com> Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>