From 7ccbce03d3f93e02e595e0aba9a4c275dfebfff2 Mon Sep 17 00:00:00 2001
From: Dylan Whyte
Date: Mon, 11 Oct 2021 17:15:19 +0200
Subject: [PATCH] docs: language and formatting fixup

Some minor language and formatting fixes to sections: Proxmox VE
Integration, pxar Command Line Tool, Managing Remotes, Maintenance Tasks,
Host System Administration, Network Management, and Technical Overview.

Signed-off-by: Dylan Whyte
---
 docs/local-zfs.rst                        | 133 ++++++++++++----------
 docs/maintenance.rst                      |  54 ++++-----
 docs/managing-remotes.rst                 |  23 ++--
 docs/network-management.rst               |   3 +-
 docs/proxmox-backup-proxy/description.rst |   2 +-
 docs/pve-integration.rst                  |  16 +--
 docs/pxar/description.rst                 |  52 ++++-----
 docs/sysadmin.rst                         |   8 +-
 docs/technical-overview.rst               | 130 ++++++++++-----------
 9 files changed, 221 insertions(+), 200 deletions(-)

diff --git a/docs/local-zfs.rst b/docs/local-zfs.rst
index f36aefe8..32af860f 100644
--- a/docs/local-zfs.rst
+++ b/docs/local-zfs.rst
@@ -4,17 +4,17 @@
 ZFS on Linux
 ------------
 
-ZFS is a combined file system and logical volume manager designed by
+ZFS is a combined file system and logical volume manager, designed by
 Sun Microsystems. There is no need to manually compile ZFS modules - all
 packages are included.
 
 By using ZFS, it's possible to achieve maximum enterprise features with
-low budget hardware, but also high performance systems by leveraging
-SSD caching or even SSD only setups. ZFS can replace cost intense
-hardware raid cards by moderate CPU and memory load combined with easy
+low budget hardware, and also high performance systems by leveraging
+SSD caching or even SSD only setups. ZFS can replace expensive
+hardware raid cards with moderate CPU and memory load, combined with easy
 management.
 
-General ZFS advantages
+General advantages of ZFS:
 
 * Easy configuration and management with GUI and CLI.
 * Reliable
@@ -34,18 +34,18 @@ General ZFS advantages
 Hardware
 ~~~~~~~~~
 
-ZFS depends heavily on memory, so you need at least 8GB to start. In
-practice, use as much you can get for your hardware/budget. To prevent
+ZFS depends heavily on memory, so it's recommended to have at least 8GB to
+start. In practice, use as much as you can get for your hardware/budget. To prevent
 data corruption, we recommend the use of high quality ECC RAM.
 
 If you use a dedicated cache and/or log disk, you should use an
-enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
+enterprise class SSD (for example, Intel SSD DC S3700 Series). This can
 increase the overall performance significantly.
 
-IMPORTANT: Do not use ZFS on top of hardware controller which has its
+IMPORTANT: Do not use ZFS on top of a hardware controller which has its
 own cache management. ZFS needs to directly communicate with disks. An
-HBA adapter is the way to go, or something like LSI controller flashed
-in ``IT`` mode.
+HBA adapter or something like an LSI controller flashed in ``IT`` mode is
+recommended.
 
 ZFS Administration
@@ -53,7 +53,7 @@ ZFS Administration
 
 This section gives you some usage examples for common tasks. ZFS
 itself is really powerful and provides many options. The main commands
-to manage ZFS are `zfs` and `zpool`. Both commands come with great
+to manage ZFS are `zfs` and `zpool`. Both commands come with extensive
 manual pages, which can be read with:
 
 .. code-block:: console
@@ -123,7 +123,7 @@ Create a new pool with cache (L2ARC)
 
 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
 
-As `<device>` it is possible to use more devices, like it's shown in
+For `<device>`, you can use multiple devices, as is shown in
 "Create a new pool with RAID*".
 
 .. code-block:: console
 
@@ -136,7 +136,7 @@ Create a new pool with log (ZIL)
 It is possible to use a dedicated cache drive partition to increase
 the performance (SSD).
 
-As `<device>` it is possible to use more devices, like it's shown in
+For `<device>`, you can use multiple devices, as is shown in
 "Create a new pool with RAID*".
 
 .. code-block:: console
 
@@ -146,8 +146,9 @@ As `<device>` it is possible to use more devices, like it's shown in
 Add cache and log to an existing pool
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If you have a pool without cache and log. First partition the SSD in
-2 partition with `parted` or `gdisk`
+You can add cache and log devices to a pool after its creation. In this example,
+we will use a single drive for both cache and log. First, you need to create
+2 partitions on the SSD with `parted` or `gdisk`
 
 .. important:: Always use GPT partition tables.
 
@@ -171,12 +172,12 @@ Changing a failed device
 Changing a failed bootable device
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Depending on how Proxmox Backup was installed it is either using `grub` or `systemd-boot`
-as bootloader.
+Depending on how Proxmox Backup was installed, it is either using `grub` or
+`systemd-boot` as a bootloader.
 
-The first steps of copying the partition table, reissuing GUIDs and replacing
-the ZFS partition are the same. To make the system bootable from the new disk,
-different steps are needed which depend on the bootloader in use.
+In either case, the first steps of copying the partition table, reissuing GUIDs
+and replacing the ZFS partition are the same. To make the system bootable from
+the new disk, different steps are needed which depend on the bootloader in use.
 
 .. code-block:: console
 
@@ -207,7 +208,7 @@ Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
 
   # grub-mkconfig -o /path/to/grub.cfg
 
-Activate E-Mail Notification
+Activate e-mail notification
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 ZFS comes with an event daemon, which monitors events generated by the
@@ -219,24 +220,24 @@ and you can install it using `apt-get`:
 
   # apt-get install zfs-zed
 
-To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
-favorite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
+To activate the daemon, it is necessary to uncomment the ZED_EMAIL_ADDR
+setting in the file `/etc/zfs/zed.d/zed.rc`.
 
 .. code-block:: console
 
   ZED_EMAIL_ADDR="root"
 
-Please note Proxmox Backup forwards mails to `root` to the email address
+Please note that Proxmox Backup forwards mails to `root` to the email address
 configured for the root user.
 
 IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.
 
-Limit ZFS Memory Usage
+Limit ZFS memory usage
 ^^^^^^^^^^^^^^^^^^^^^^
 
 It is good to use at most 50 percent (which is the default) of the
-system memory for ZFS ARC to prevent performance shortage of the
+system memory for ZFS ARC, to prevent performance degradation of the
 host. Use your preferred editor to change the configuration in
 `/etc/modprobe.d/zfs.conf` and insert:
 
@@ -244,27 +245,42 @@ host. Use your preferred editor to change the configuration in
 
   options zfs zfs_arc_max=8589934592
 
-This example setting limits the usage to 8GB.
+The above example limits the usage to 8 GiB ('8 * 2^30^').
 
-.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:
+.. 
IMPORTANT:: In case your desired `zfs_arc_max` value is lower than or equal + to `zfs_arc_min` (which defaults to 1/32 of the system memory), `zfs_arc_max` + will be ignored. Thus, for it to work in this case, you must set + `zfs_arc_min` to at most `zfs_arc_max - 1`. This would require updating the + configuration in `/etc/modprobe.d/zfs.conf`, with: + +.. code-block:: console + options zfs zfs_arc_min=8589934591 + options zfs zfs_arc_max=8589934592 + +This example setting limits the usage to 8 GiB ('8 * 2^30^') on +systems with more than 256 GiB of total memory, where simply setting +`zfs_arc_max` alone would not work. + +.. IMPORTANT:: If your root file system is ZFS, you must update your initramfs + every time this value changes. .. code-block:: console # update-initramfs -u -SWAP on ZFS +Swap on ZFS ^^^^^^^^^^^ -Swap-space created on a zvol may generate some troubles, like blocking the +Swap-space created on a zvol may cause some issues, such as blocking the server or generating a high IO load, often seen when starting a Backup to an external Storage. -We strongly recommend to use enough memory, so that you normally do not +We strongly recommend using enough memory, so that you normally do not run into low memory situations. Should you need or want to add swap, it is -preferred to create a partition on a physical disk and use it as swap device. +preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the -installer. Additionally, you can lower the `swappiness` value. +installer. Additionally, you can lower the `swappiness` value. A good value for servers is 10: .. code-block:: console @@ -291,7 +307,7 @@ an editor of your choice and add the following line: vm.swappiness = 100 The kernel will swap aggressively. ==================== =============================================================== -ZFS Compression +ZFS compression ^^^^^^^^^^^^^^^ To activate compression: @@ -300,10 +316,11 @@ To activate compression: # zpool set compression=lz4 We recommend using the `lz4` algorithm, since it adds very little CPU overhead. -Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer `1-9` representing -the compression ratio, 1 is fastest and 9 is best compression) are also available. -Depending on the algorithm and how compressible the data is, having compression enabled can even increase -I/O performance. +Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer from `1-9` +representing the compression ratio, where 1 is fastest and 9 is best +compression) are also available. Depending on the algorithm and how +compressible the data is, having compression enabled can even increase I/O +performance. You can disable compression at any time with: .. code-block:: console @@ -314,26 +331,26 @@ Only new blocks will be affected by this change. .. _local_zfs_special_device: -ZFS Special Device +ZFS special device ^^^^^^^^^^^^^^^^^^ -Since version 0.8.0 ZFS supports `special` devices. A `special` device in a +Since version 0.8.0, ZFS supports `special` devices. A `special` device in a pool is used to store metadata, deduplication tables, and optionally small file blocks. A `special` device can improve the speed of a pool consisting of slow spinning -hard disks with a lot of metadata changes. For example workloads that involve +hard disks with a lot of metadata changes. 
For example, workloads that involve
 creating, updating or deleting a large number of files will benefit from the
 presence of a `special` device. ZFS datasets can also be configured to store
-whole small files on the `special` device which can further improve the
+small files on the `special` device, which can further improve the
 performance. Use fast SSDs for the `special` device.
 
 .. IMPORTANT:: The redundancy of the `special` device should match the one of the
-   pool, since the `special` device is a point of failure for the whole pool.
+   pool, since the `special` device is a point of failure for the entire pool.
 
 .. WARNING:: Adding a `special` device to a pool cannot be undone!
 
-Create a pool with `special` device and RAID-1:
+To create a pool with `special` device and RAID-1:
 
 .. code-block:: console
 
@@ -346,8 +363,8 @@ Adding a `special` device to an existing pool with RAID-1:
 
   # zpool add <pool> special mirror <device1> <device2>
 
 ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
-`0` to disable storing small file blocks on the `special` device or a power of
-two in the range between `512B` to `128K`. After setting the property new file
+`0` to disable storing small file blocks on the `special` device, or a power of
+two in the range between `512B` to `128K`. After setting this property, new file
 blocks smaller than `size` will be allocated on the `special` device.
 
 .. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
@@ -355,10 +372,10 @@ blocks smaller than `size` will be allocated on the `special` device.
    the `special` device, so be careful!
 
 Setting the `special_small_blocks` property on a pool will change the default
-value of that property for all child ZFS datasets (for example all containers
+value of that property for all child ZFS datasets (for example, all containers
 in the pool will opt in for small file blocks).
 
-Opt in for all file smaller than 4K-blocks pool-wide:
+Opt in for all files smaller than 4K-blocks pool-wide:
 
 .. code-block:: console
 
   # zfs set special_small_blocks=4K <pool>
 
@@ -379,10 +396,15 @@ Opt out from small file blocks for a single dataset:
 Troubleshooting
 ^^^^^^^^^^^^^^^
 
-Corrupted cachefile
+Corrupt cache file
+""""""""""""""""""
+
+`zfs-import-cache.service` imports ZFS pools using the ZFS cache file. If this
+file becomes corrupted, the service won't be able to import the pools that it's
+unable to read from it.
 
-In case of a corrupted ZFS cachefile, some volumes may not be mounted during
-boot until mounted manually later.
+As a result, in case of a corrupted ZFS cache file, some volumes may not be
+mounted during boot and must be mounted manually later.
 
 For each pool, run:
 
@@ -390,16 +412,13 @@ For each pool, run:
 
   # zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
 
-and afterwards update the `initramfs` by running:
+then, update the `initramfs` by running:
 
 .. code-block:: console
 
   # update-initramfs -u -k all
 
-and finally reboot your node.
-
-Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
-doesn't import the pools that aren't present in the cachefile.
+and finally, reboot the node.
 
 Another workaround to this problem is enabling the `zfs-import-scan.service`,
 which searches and imports pools via device scanning (usually slower).
diff --git a/docs/maintenance.rst b/docs/maintenance.rst
index 15b313d0..8be5b666 100644
--- a/docs/maintenance.rst
+++ b/docs/maintenance.rst
@@ -14,15 +14,15 @@ following retention options are available:
 
 ``keep-hourly <N>``
   Keep backups for the last ``<N>`` hours. 
If there is more than one
-  backup for a single hour, only the latest is kept.
+  backup for a single hour, only the latest is retained.
 
 ``keep-daily <N>``
   Keep backups for the last ``<N>`` days. If there is more than one
-  backup for a single day, only the latest is kept.
+  backup for a single day, only the latest is retained.
 
 ``keep-weekly <N>``
   Keep backups for the last ``<N>`` weeks. If there is more than one
-  backup for a single week, only the latest is kept.
+  backup for a single week, only the latest is retained.
 
   .. note:: Weeks start on Monday and end on Sunday. The software
      uses the `ISO week date`_ system and handles weeks at
@@ -30,17 +30,17 @@ following retention options are available:
 
 ``keep-monthly <N>``
   Keep backups for the last ``<N>`` months. If there is more than one
-  backup for a single month, only the latest is kept.
+  backup for a single month, only the latest is retained.
 
 ``keep-yearly <N>``
   Keep backups for the last ``<N>`` years. If there is more than one
-  backup for a single year, only the latest is kept.
+  backup for a single year, only the latest is retained.
 
 The retention options are processed in the order given above. Each option
 only covers backups within its time period. The next option does not take care
 of already covered backups. It will only consider older backups.
 
-Unfinished and incomplete backups will be removed by the prune command unless
+Unfinished and incomplete backups will be removed by the prune command, unless
 they are newer than the last successful backup. In this case, the last failed
 backup is retained.
 
@@ -48,7 +48,7 @@ Prune Simulator
 ^^^^^^^^^^^^^^^
 
 You can use the built-in `prune simulator <https://pbs.proxmox.com/docs/prune-simulator/>`_
-to explore the effect of different retetion options with various backup
+to explore the effect of different retention options with various backup
 schedules.
 
 Manual Pruning
@@ -59,10 +59,10 @@ Manual Pruning
   :align: right
   :alt: Prune and garbage collection options
 
-To access pruning functionality for a specific backup group, you can use the
-prune command line option discussed in :ref:`backup-pruning`, or navigate to
-the **Content** tab of the datastore and click the scissors icon in the
-**Actions** column of the relevant backup group.
+To manually prune a specific backup group, you can use
+``proxmox-backup-client``'s ``prune`` subcommand, discussed in
+:ref:`backup-pruning`, or navigate to the **Content** tab of the datastore and
+click the scissors icon in the **Actions** column of the relevant backup group.
 
 Prune Schedules
 ^^^^^^^^^^^^^^^
@@ -81,7 +81,7 @@ Retention Settings Example
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The backup frequency and retention of old backups may depend on how often data
-changes, and how important an older state may be, in a specific work load.
+changes and how important an older state may be in a specific workload.
 When backups act as a company's document archive, there may also be legal
 requirements for how long backup snapshots must be kept.
 
@@ -125,8 +125,8 @@ start garbage collection on an entire datastore and the ``status`` subcommand to
 see attributes relating to the :ref:`garbage collection `.
 
 This functionality can also be accessed in the GUI, by navigating to **Prune &
-GC** from the top panel. From here, you can edit the schedule at which garbage
-collection runs and manually start the operation.
+GC** from the top panel of a datastore. From here, you can edit the schedule at
+which garbage collection runs and manually start the operation.
 
 .. 
_maintenance_verification: @@ -139,13 +139,13 @@ Verification :align: right :alt: Adding a verify job -Proxmox Backup offers various verification options to ensure that backup data is -intact. Verification is generally carried out through the creation of verify -jobs. These are scheduled tasks that run verification at a given interval (see -:ref:`calendar-event-scheduling`). With these, you can set whether already verified -snapshots are ignored, as well as set a time period, after which verified jobs -are checked again. The interface for creating verify jobs can be found under the -**Verify Jobs** tab of the datastore. +Proxmox Backup Server offers various verification options to ensure that backup +data is intact. Verification is generally carried out through the creation of +verify jobs. These are scheduled tasks that run verification at a given interval +(see :ref:`calendar-event-scheduling`). With these, you can also set whether +already verified snapshots are ignored, as well as set a time period, after +which snapshots are checked again. The interface for creating verify jobs can be +found under the **Verify Jobs** tab of the datastore. .. Note:: It is recommended that you reverify all backups at least monthly, even if a previous verification was successful. This is because physical drives @@ -158,9 +158,9 @@ are checked again. The interface for creating verify jobs can be found under the data. Aside from using verify jobs, you can also run verification manually on entire -datastores, backup groups, or snapshots. To do this, navigate to the **Content** -tab of the datastore and either click *Verify All*, or select the *V.* icon from -the *Actions* column in the table. +datastores, backup groups or snapshots. To do this, navigate to the **Content** +tab of the datastore and either click *Verify All* or select the *V.* icon from +the **Actions** column in the table. .. _maintenance_notification: @@ -170,8 +170,8 @@ Notifications Proxmox Backup Server can send you notification emails about automatically scheduled verification, garbage-collection and synchronization tasks results. -By default, notifications are send to the email address configured for the -`root@pam` user. You can set that user for each datastore. +By default, notifications are sent to the email address configured for the +`root@pam` user. You can instead set this user for each datastore. You can also change the level of notification received per task type, the following options are available: @@ -179,6 +179,6 @@ following options are available: * Always: send a notification for any scheduled task, independent of the outcome -* Errors: send a notification for any scheduled task resulting in an error +* Errors: send a notification for any scheduled task that results in an error * Never: do not send any notification at all diff --git a/docs/managing-remotes.rst b/docs/managing-remotes.rst index 88ab3ba2..ccb7313e 100644 --- a/docs/managing-remotes.rst +++ b/docs/managing-remotes.rst @@ -17,8 +17,8 @@ configuration information for remotes is stored in the file :align: right :alt: Add a remote -To add a remote, you need its hostname or IP, a userid and password on the -remote, and its certificate fingerprint. To get the fingerprint, use the +To add a remote, you need its hostname or IP address, a userid and password on +the remote, and its certificate fingerprint. 
To get the fingerprint, use the ``proxmox-backup-manager cert info`` command on the remote, or navigate to **Dashboard** in the remote's web interface and select **Show Fingerprint**. @@ -60,12 +60,13 @@ Sync Jobs Sync jobs are configured to pull the contents of a datastore on a **Remote** to a local datastore. You can manage sync jobs in the web interface, from the -**Sync Jobs** tab of the datastore which you'd like to set one up for, or using -the ``proxmox-backup-manager sync-job`` command. The configuration information -for sync jobs is stored at ``/etc/proxmox-backup/sync.cfg``. To create a new -sync job, click the add button in the GUI, or use the ``create`` subcommand. -After creating a sync job, you can either start it manually from the GUI or -provide it with a schedule (see :ref:`calendar-event-scheduling`) to run regularly. +**Sync Jobs** tab of the **Datastore** panel or from that of the Datastore +itself. Alternatively, you can manage them with the ``proxmox-backup-manager +sync-job`` command. The configuration information for sync jobs is stored at +``/etc/proxmox-backup/sync.cfg``. To create a new sync job, click the add button +in the GUI, or use the ``create`` subcommand. After creating a sync job, you can +either start it manually from the GUI or provide it with a schedule (see +:ref:`calendar-event-scheduling`) to run regularly. .. code-block:: console @@ -79,14 +80,14 @@ provide it with a schedule (see :ref:`calendar-event-scheduling`) to run regular └────────────┴───────┴────────┴──────────────┴───────────┴─────────┘ # proxmox-backup-manager sync-job remove pbs2-local -For setting up sync jobs, the configuring user needs the following permissions: +To set up sync jobs, the configuring user needs the following permissions: #. ``Remote.Read`` on the ``/remote/{remote}/{remote-store}`` path -#. at least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``) +#. At least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``) If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on the local datastore as well. If the ``owner`` option is not set (defaulting to -``root@pam``) or set to something other than the configuring user, +``root@pam``) or is set to something other than the configuring user, ``Datastore.Modify`` is required as well. .. note:: A sync job can only sync backup groups that the configured remote's diff --git a/docs/network-management.rst b/docs/network-management.rst index 4b7ac75d..d6d84651 100644 --- a/docs/network-management.rst +++ b/docs/network-management.rst @@ -82,7 +82,8 @@ is: .. note:: This command and corresponding GUI button rely on the ``ifreload`` command, from the package ``ifupdown2``. This package is included within the Proxmox Backup Server installation, however, you may have to install it yourself, - if you have installed Proxmox Backup Server on top of Debian or Proxmox VE. + if you have installed Proxmox Backup Server on top of Debian or a Proxmox VE + version prior to version 7. You can also configure DNS settings, from the **DNS** section of **Configuration** or by using the ``dns`` subcommand of diff --git a/docs/proxmox-backup-proxy/description.rst b/docs/proxmox-backup-proxy/description.rst index 34e620e8..2fb55020 100644 --- a/docs/proxmox-backup-proxy/description.rst +++ b/docs/proxmox-backup-proxy/description.rst @@ -1,5 +1,5 @@ This daemon exposes the whole Proxmox Backup Server API on TCP port 8007 using HTTPS. 
It runs as user ``backup`` and has very limited -permissions. Operation requiring more permissions are forwarded to +permissions. Operations requiring more permissions are forwarded to the local ``proxmox-backup`` service. diff --git a/docs/pve-integration.rst b/docs/pve-integration.rst index 35d2adfd..2fe5c66a 100644 --- a/docs/pve-integration.rst +++ b/docs/pve-integration.rst @@ -3,8 +3,8 @@ `Proxmox VE`_ Integration ------------------------- -A Proxmox Backup Server can be integrated into a Proxmox VE setup by adding the -former as a storage in a Proxmox VE standalone or cluster setup. +Proxmox Backup Server can be integrated into a Proxmox VE standalone or cluster +setup, by adding it as a storage in Proxmox VE. See also the `Proxmox VE Storage - Proxmox Backup Server `_ section @@ -14,8 +14,8 @@ of the Proxmox VE Administration Guide for Proxmox VE specific documentation. Using the Proxmox VE Web-Interface ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Proxmox VE has native API and web-interface integration of Proxmox Backup -Server since the `Proxmox VE 6.3 release +Proxmox VE has native API and web interface integration of Proxmox Backup +Server as of `Proxmox VE 6.3 `_. A Proxmox Backup Server can be added under ``Datacenter -> Storage``. @@ -24,8 +24,8 @@ Using the Proxmox VE Command-Line ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You need to define a new storage with type 'pbs' on your `Proxmox VE`_ -node. The following example uses ``store2`` as storage name, and -assumes the server address is ``localhost``, and you want to connect +node. The following example uses ``store2`` as the storage's name, and +assumes the server address is ``localhost`` and you want to connect as ``user1@pbs``. .. code-block:: console @@ -33,7 +33,7 @@ as ``user1@pbs``. # pvesm add pbs store2 --server localhost --datastore store2 # pvesm set store2 --username user1@pbs --password -.. note:: If you would rather not pass your password as plain text, you can pass +.. note:: If you would rather not enter your password as plain text, you can pass the ``--password`` parameter, without any arguments. This will cause the program to prompt you for a password upon entering the command. @@ -53,7 +53,7 @@ relationship: # pvesm set store2 --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe -After that you should be able to see storage status with: +After that, you should be able to view storage status with: .. code-block:: console diff --git a/docs/pxar/description.rst b/docs/pxar/description.rst index 770d240c..bf802809 100644 --- a/docs/pxar/description.rst +++ b/docs/pxar/description.rst @@ -1,12 +1,12 @@ -``pxar`` is a command line utility to create and manipulate archives in the +``pxar`` is a command line utility for creating and manipulating archives in the :ref:`pxar-format`. It is inspired by `casync file archive format `_, which caters to a similar use-case. The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox -Backup Server, for example, efficient storage of hardlinks. -The format is designed to reduce storage space needed on the server by achieving -a high level of deduplication. +Backup Server, for example, efficient storage of hard links. +The format is designed to reduce the required storage on the server by +achieving a high level of deduplication. Creating an Archive ^^^^^^^^^^^^^^^^^^^ @@ -24,10 +24,10 @@ This will create a new archive called ``archive.pxar`` with the contents of the the same name is already present in the target folder, the creation will fail. 
-By default, ``pxar`` will skip certain mountpoints and will not follow device +By default, ``pxar`` will skip certain mount points and will not follow device boundaries. This design decision is based on the primary use case of creating -archives for backups. It makes sense to not back up the contents of certain -temporary or system specific files. +archives for backups. It makes sense to ignore the contents of certain +temporary or system specific files in a backup. To alter this behavior and follow device boundaries, use the ``--all-file-systems`` flag. @@ -41,40 +41,38 @@ by running: # pxar create archive.pxar /path/to/source --exclude '**/*.txt' -Be aware that the shell itself will try to expand all of the glob patterns before -invoking ``pxar``. -In order to avoid this, all globs have to be quoted correctly. +Be aware that the shell itself will try to expand glob patterns before invoking +``pxar``. In order to avoid this, all globs have to be quoted correctly. It is possible to pass the ``--exclude`` parameter multiple times, in order to match more than one pattern. This allows you to use more complex -file exclusion/inclusion behavior. However, it is recommended to use +file inclusion/exclusion behavior. However, it is recommended to use ``.pxarexclude`` files instead for such cases. -For example you might want to exclude all ``.txt`` files except for a specific -one from the archive. This is achieved via the negated match pattern, prefixed -by ``!``. -All the glob patterns are relative to the ``source`` directory. +For example you might want to exclude all ``.txt`` files except a specific +one from the archive. This would be achieved via the negated match pattern, +prefixed by ``!``. All the glob patterns are relative to the ``source`` +directory. .. code-block:: console # pxar create archive.pxar /path/to/source --exclude '**/*.txt' --exclude '!/folder/file.txt' -.. NOTE:: The order of the glob match patterns matters as later ones override - previous ones. Permutations of the same patterns lead to different results. +.. NOTE:: The order of the glob match patterns matters, as later ones override + earlier ones. Permutations of the same patterns lead to different results. ``pxar`` will store the list of glob match patterns passed as parameters via the -command line, in a file called ``.pxarexclude-cli`` at the root of -the archive. +command line, in a file called ``.pxarexclude-cli``, at the root of the archive. If a file with this name is already present in the source folder during archive -creation, this file is not included in the archive and the file containing the -new patterns is added to the archive instead, the original file is not altered. +creation, this file is not included in the archive, and the file containing the +new patterns is added to the archive instead. The original file is not altered. A more convenient and persistent way to exclude files from the archive is by placing the glob match patterns in ``.pxarexclude`` files. It is possible to create and place these files in any directory of the filesystem tree. -These files must contain one pattern per line, again later patterns win over -previous ones. +These files must contain one pattern per line, and later patterns override +earlier ones. The patterns control file exclusions of files present within the given directory or further below it in the tree. The behavior is the same as described in :ref:`client_creating_backups`. 
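For example, mirroring the command line patterns shown above, a
``.pxarexclude`` file placed in the root of the source directory could contain
the following two lines (the file and folder names are just an illustration):

.. code-block:: console

  **/*.txt
  !/folder/file.txt

As with the command line parameters, the second, negated pattern comes later
and therefore overrides the first, so all ``.txt`` files except
``/folder/file.txt`` would be excluded from the archive.
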
@@ -89,7 +87,7 @@ with the following command:
 
   # pxar extract archive.pxar /path/to/target
 
-If no target is provided, the content of the archive is extracted to the current
+If no target is provided, the contents of the archive are extracted to the current
 working directory.
 
 In order to restore only parts of an archive, single files, and/or folders,
@@ -116,13 +114,13 @@ run the following command:
 
   # pxar list archive.pxar
 
 This displays the full path of each file or directory with respect to the
-archives root.
+archive's root.
 
 Mounting an Archive
 ^^^^^^^^^^^^^^^^^^^
 
 ``pxar`` allows you to mount and inspect the contents of an archive via _`FUSE`.
-In order to mount an archive named ``archive.pxar`` to the mountpoint ``/mnt``,
+In order to mount an archive named ``archive.pxar`` to the mount point ``/mnt``,
 run the command:
 
 .. code-block:: console
 
   # pxar mount archive.pxar /mnt
 
 Once the archive is mounted, you can access its content under the given
-mountpoint.
+mount point.
 
 .. code-block:: console
 
diff --git a/docs/sysadmin.rst b/docs/sysadmin.rst
index 51601f2a..626b28fe 100644
--- a/docs/sysadmin.rst
+++ b/docs/sysadmin.rst
@@ -4,8 +4,8 @@ Host System Administration
 ==========================
 
 `Proxmox Backup`_ is based on the famous Debian_ Linux
-distribution. That means that you have access to the whole world of
-Debian packages, and the base system is well documented. The `Debian
+distribution. This means that you have access to the entire range of
+Debian packages, and that the base system is well documented. The `Debian
 Administrator's Handbook`_ is available online, and provides a
 comprehensive introduction to the Debian operating system.
 
@@ -17,11 +17,11 @@ updates to some Debian packages when necessary.
 
 We also deliver a specially optimized Linux kernel, where we enable all
 required virtualization and container features. That kernel
-includes drivers for ZFS_, and several hardware drivers. For example,
+includes drivers for ZFS_, as well as several hardware drivers. For example,
 we ship Intel network card drivers to support their newest hardware.
 
 The following sections will concentrate on backup related topics. They
-either explain things which are different on `Proxmox Backup`_, or
+will explain things which are different on `Proxmox Backup`_, or
 tasks which are commonly used on `Proxmox Backup`_. For other topics,
 please refer to the standard Debian documentation.
 
diff --git a/docs/technical-overview.rst b/docs/technical-overview.rst
index 4223b2c5..59a59c11 100644
--- a/docs/technical-overview.rst
+++ b/docs/technical-overview.rst
@@ -8,7 +8,7 @@ Datastores
 
 A Datastore is the logical place where :ref:`Backup Snapshots
 ` and their chunks are stored. Snapshots consist of a
-manifest, blobs, dynamic- and fixed-indexes (see :ref:`terms`), and are
+manifest, blobs, and dynamic- and fixed-indexes (see :ref:`terms`), and are
 stored in the following directory structure:
 
 ///
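
For example, assuming a datastore whose backing directory is
``/mnt/datastore/store1`` (the path here is purely illustrative), the snapshots
of a virtual machine with backup ID ``100`` could be listed with:

.. code-block:: console

  # ls -d /mnt/datastore/store1/vm/100/*
  /mnt/datastore/store1/vm/100/2021-10-11T17:15:19Z

Here ``vm`` is the backup type, ``100`` is the backup ID, and the last path
component is the creation time of the snapshot.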