Backup Storage
==============

.. _storage_disk_management:

Disk Management
---------------

.. image:: images/screenshots/pbs-gui-disks.png
   :target: _images/pbs-gui-disks.png
   :align: right
   :alt: List of disks

`Proxmox Backup`_ Server comes with a set of disk utilities, which are
accessed using the ``disk`` subcommand or the web interface. This subcommand
allows you to initialize disks, create various filesystems, and get information
about the disks.

To view the disks connected to the system, navigate to **Administration ->
Storage/Disks** in the web interface or use the ``list`` subcommand of
``disk``:

.. code-block:: console

   # proxmox-backup-manager disk list
   ┌──────┬────────┬─────┬───────────┬─────────────┬───────────────┬─────────┬────────┐
   │ name │ used   │ gpt │ disk-type │ size        │ model         │ wearout │ status │
   ╞══════╪════════╪═════╪═══════════╪═════════════╪═══════════════╪═════════╪════════╡
   │ sda  │ lvm    │ 1   │ hdd       │ 34359738368 │ QEMU_HARDDISK │ -       │ passed │
   ├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
   │ sdb  │ unused │ 1   │ hdd       │ 68719476736 │ QEMU_HARDDISK │ -       │ passed │
   ├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
   │ sdc  │ unused │ 1   │ hdd       │ 68719476736 │ QEMU_HARDDISK │ -       │ passed │
   └──────┴────────┴─────┴───────────┴─────────────┴───────────────┴─────────┴────────┘

To initialize a disk with a new GPT, use the ``initialize`` subcommand:

.. code-block:: console

   # proxmox-backup-manager disk initialize sdX

.. image:: images/screenshots/pbs-gui-disks-dir-create.png
   :target: _images/pbs-gui-disks-dir-create.png
   :align: right
   :alt: Create a directory

You can create an ``ext4`` or ``xfs`` filesystem on a disk using ``fs
create``, or by navigating to **Administration -> Storage/Disks -> Directory**
in the web interface and creating one from there. The following command creates
an ``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order
to automatically create a datastore on the disk. This creates a datastore at
the location ``/mnt/datastore/store1``:

.. code-block:: console

   # proxmox-backup-manager disk fs create store1 --disk sdX --filesystem ext4 --add-datastore true

.. image:: images/screenshots/pbs-gui-disks-zfs-create.png
   :align: right
   :alt: Create ZFS

You can also create a ``zpool`` with various raid levels from **Administration
-> Storage/Disks -> ZFS** in the web interface, or by using ``zpool create``.
The command below creates a mirrored ``zpool`` using two disks and mounts it
under ``/mnt/datastore/zpool1``:

.. code-block:: console

   # proxmox-backup-manager disk zpool create zpool1 --devices sdX,sdY --raidlevel mirror

.. note:: You can also pass the ``--add-datastore`` parameter here, to
   automatically create a datastore from the disk.

You can use ``disk fs list`` and ``disk zpool list`` to keep track of your
filesystems and zpools respectively.

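For example, to review both at a glance (shown without output, since the
result depends on your particular disks and pools):

.. code-block:: console

   # proxmox-backup-manager disk fs list
   # proxmox-backup-manager disk zpool list
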
Proxmox Backup Server uses the package smartmontools. This is a set of tools
used to monitor and control the S.M.A.R.T. system for local hard disks. If a
disk supports S.M.A.R.T. capability, and you have this enabled, you can
display S.M.A.R.T. attributes from the web interface or by using the command:

.. code-block:: console

   # proxmox-backup-manager disk smart-attributes sdX

.. note:: This functionality may also be accessed directly through the use of
   the ``smartctl`` command, which comes as part of the smartmontools package
   (see ``man smartctl`` for more details).

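The equivalent, using ``smartctl`` directly, could look like this (a sketch;
``--all`` prints all available S.M.A.R.T. information and requires a
S.M.A.R.T.-capable disk):

.. code-block:: console

   # smartctl --all /dev/sdX
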
.. _datastore_intro:

:term:`Datastore`
-----------------

.. image:: images/screenshots/pbs-gui-datastore-summary.png
   :target: _images/pbs-gui-datastore-summary.png
   :align: right
   :alt: Datastore Usage Overview

A datastore refers to a location at which backups are stored. The current
implementation uses a directory inside a standard Unix file system (``ext4``,
``xfs`` or ``zfs``) to store the backup data.

Datastores are identified by a simple *ID*. You can configure this
when setting up the datastore. The configuration information for datastores
is stored in the file ``/etc/proxmox-backup/datastore.cfg``.

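For illustration, an entry in that file may look something like the following
sketch (the name, path and comment are examples, matching the ``store1``
datastore used in the commands further below):

.. code-block:: console

   datastore: store1
       path /backup/disk1/store1
       comment This is my default storage.
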
.. note:: The `File Layout`_ requires the file system to support at least *65538*
   subdirectories per directory. That number comes from the 2\ :sup:`16`
   pre-created chunk namespace directories, and the ``.`` and ``..`` default
   directory entries. This requirement excludes certain filesystems and
   filesystem configurations from being supported for a datastore. For example,
   ``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually
   disabled.

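The arithmetic behind that number can be double-checked with a quick shell
sketch:

.. code-block:: console

   # echo $(( 16**4 + 2 ))   # 0x0000 through 0xffff, plus "." and ".."
   65538
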

Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~

.. image:: images/screenshots/pbs-gui-datastore-content.png
   :target: _images/pbs-gui-datastore-content.png
   :align: right
   :alt: Datastore Content Overview

You can configure multiple datastores, but a minimum of one datastore needs to
be configured. A datastore is identified by a simple *name* and points to a
directory on the filesystem. Each datastore also has associated retention
settings, defining how many backup snapshots to keep for each interval of
``hourly``, ``daily``, ``weekly``, ``monthly`` and ``yearly``, as well as a
time-independent number of backups to keep in that store. :ref:`backup-pruning`
and :ref:`garbage collection <client_garbage-collection>` can also be
configured to run periodically, based on a configured schedule (see
:ref:`calendar-event-scheduling`) per datastore.

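For example, the retention settings can be adjusted from the command line with
the ``update`` subcommand (a sketch; ``--keep-last`` is the option
corresponding to the ``keep-last`` property, and ``--keep-weekly`` is assumed
to follow the same pattern for the weekly interval):

.. code-block:: console

   # proxmox-backup-manager datastore update store1 --keep-last 7 --keep-weekly 4
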
.. _storage_datastore_create:

Creating a Datastore
^^^^^^^^^^^^^^^^^^^^

.. image:: images/screenshots/pbs-gui-datastore-create.png
   :target: _images/pbs-gui-datastore-create.png
   :align: right
   :alt: Create a datastore

You can create a new datastore from the web interface, by clicking **Add
Datastore** in the side menu, under the **Datastore** section. In the setup
window:

* *Name* refers to the name of the datastore
* *Backing Path* is the path to the directory upon which you want to create the
  datastore
* *GC Schedule* refers to the time and intervals at which garbage collection
  runs
* *Prune Schedule* refers to the frequency at which pruning takes place
* *Prune Options* set the number of backups which you would like to keep (see
  :ref:`backup-pruning`)
* *Comment* can be used to add some contextual information to the datastore

Alternatively, you can create a new datastore from the command line. The
following command creates a new datastore called ``store1`` on
:file:`/backup/disk1/store1`:

.. code-block:: console

   # proxmox-backup-manager datastore create store1 /backup/disk1/store1


Managing Datastores
^^^^^^^^^^^^^^^^^^^

To list existing datastores from the command line, run:

.. code-block:: console

   # proxmox-backup-manager datastore list
   ┌────────┬──────────────────────┬─────────────────────────────┐
   │ name   │ path                 │ comment                     │
   ╞════════╪══════════════════════╪═════════════════════════════╡
   │ store1 │ /backup/disk1/store1 │ This is my default storage. │
   └────────┴──────────────────────┴─────────────────────────────┘

You can change the garbage collection and prune settings of a datastore, by
editing the datastore from the GUI or by using the ``update`` subcommand. For
example, the below command changes the garbage collection schedule using the
``update`` subcommand and prints the properties of the datastore with the
``show`` subcommand:

.. code-block:: console

   # proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27'
   # proxmox-backup-manager datastore show store1
   ┌────────────────┬─────────────────────────────┐
   │ Name           │ Value                       │
   ╞════════════════╪═════════════════════════════╡
   │ name           │ store1                      │
   ├────────────────┼─────────────────────────────┤
   │ path           │ /backup/disk1/store1        │
   ├────────────────┼─────────────────────────────┤
   │ comment        │ This is my default storage. │
   ├────────────────┼─────────────────────────────┤
   │ gc-schedule    │ Tue 04:27                   │
   ├────────────────┼─────────────────────────────┤
   │ keep-last      │ 7                           │
   ├────────────────┼─────────────────────────────┤
   │ prune-schedule │ daily                       │
   └────────────────┴─────────────────────────────┘

Finally, it is possible to remove the datastore configuration:

.. code-block:: console

   # proxmox-backup-manager datastore remove store1

.. note:: The above command removes only the datastore configuration. It does
   not delete any data from the underlying directory.

File Layout
^^^^^^^^^^^

After creating a datastore, the following default layout will appear:

.. code-block:: console

   # ls -arilh /backup/disk1/store1
   276493 -rw-r--r-- 1 backup backup       0 Jul  8 12:35 .lock
   276490 drwxr-x--- 1 backup backup 1064960 Jul  8 12:35 .chunks

`.lock` is an empty file used for process locking.

The `.chunks` directory contains folders, starting from `0000` and increasing in
hexadecimal values until `ffff`. These directories will store the chunked data,
categorized by checksum, after a backup operation has been executed.

.. code-block:: console

   # ls -arilh /backup/disk1/store1/.chunks
   545824 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 ffff
   545823 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffe
   415621 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffd
   415620 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffc
   353187 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffb
   344995 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffa
   144079 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fff9
   144078 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fff8
   144077 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fff7
   ...
   403180 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 000c
   403179 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 000b
   403177 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 000a
   402530 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0009
   402513 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0008
   402509 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0007
   276509 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0006
   276508 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0005
   276507 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0004
   276501 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0003
   276499 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0002
   276498 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0001
   276494 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 0000
   276489 drwxr-xr-x 3 backup backup 4.0K Jul  8 12:35 ..
   276490 drwxr-x--- 1 backup backup 1.1M Jul  8 12:35 .

Once you've uploaded some backups or created namespaces, you may see the backup
type (`ct`, `vm`, `host`) and the start of the namespace hierarchy (`ns`).

.. _storage_namespaces:

Backup Namespaces
~~~~~~~~~~~~~~~~~

A datastore can host many backups, as long as the underlying storage is large
enough and provides the performance required for a user's use case. However,
without any hierarchy or separation, it's easy to run into naming conflicts,
especially when using the same datastore for multiple Proxmox VE instances or
multiple users.

The backup namespace hierarchy allows you to clearly separate different users
or backup sources in general, avoiding naming conflicts and providing a
well-organized backup content view.

Each namespace level can host any backup type, CT, VM or Host, but also other
namespaces, up to a depth of 8 levels, where the root namespace is the first
level.

Namespace Permissions
^^^^^^^^^^^^^^^^^^^^^

You can make the permission configuration of a datastore more fine-grained by
setting permissions only on a specific namespace.

To view a datastore, you need a permission that has at least an `AUDIT`,
`MODIFY`, `READ` or `BACKUP` privilege on any namespace it contains.

To create or delete a namespace, you require the `MODIFY` privilege on the
parent namespace. Thus, to initially create namespaces, you need a permission
with an access role that includes the `MODIFY` privilege on the datastore
itself.

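For example, granting a user such a role could look like the following sketch
(the user `alice@pbs` and the `DatastoreAdmin` role are illustrative; see the
user management documentation for the roles available on your installation):

.. code-block:: console

   # proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --auth-id alice@pbs
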
For backup groups, the existing privilege rules still apply. You either need a
sufficiently privileged permission or to be the owner of the backup group;
nothing has changed here.

.. todo:: continue

Options
~~~~~~~

.. image:: images/screenshots/pbs-gui-datastore-options.png
   :target: _images/pbs-gui-datastore-options.png
   :align: right
   :alt: Datastore Options

There are a few per-datastore options:

* :ref:`Notifications <maintenance_notification>`
* :ref:`Maintenance Mode <maintenance_mode>`
* Verification of incoming backups

.. _datastore_tuning_options:

Tuning
^^^^^^

There are some more advanced, tuning-related options for the datastore:

* ``chunk-order``: Chunk order for verify & tape backup:

  You can specify the order in which Proxmox Backup Server iterates the chunks
  when doing a verify or backing up to tape. The two options are:

  - `inode` (default): Sorts the chunks by inode number of the filesystem
    before iterating over them. This should be fine for most storages,
    especially spinning disks.
  - `none`: Iterates the chunks in the order they appear in the
    index file (.fidx/.didx). While this might slow down iterating on many slow
    storages, on very fast ones (for example, NVMEs) the collecting and sorting
    can take more time than is gained through the sorted iterating.

  This option can be set with:

  .. code-block:: console

     # proxmox-backup-manager datastore update <storename> --tuning 'chunk-order=none'

* ``sync-level``: Datastore fsync level:

  You can set the level of syncing on the datastore for chunks, which influences
  the crash resistance of backups in case of a power loss or hard shutoff.
  There are currently three levels:

  - `none`: Does not do any syncing when writing chunks. This is fast
    and normally OK, since the kernel eventually flushes writes onto the disk.
    The kernel sysctls `dirty_expire_centisecs` and `dirty_writeback_centisecs`
    are used to tune that behaviour, while the default is to flush old data
    after ~30s.

  - `filesystem` (default): This triggers a ``syncfs(2)`` after a backup, but
    before the task returns `OK`. This way it is ensured that the written
    backups are on disk. This is a good balance between speed and consistency.
    Note that the underlying storage device still needs to protect itself
    against power loss, in order to flush its internal ephemeral caches to the
    permanent storage layer.

  - `file`: With this mode, an fsync is triggered on every chunk insertion,
    which makes sure each and every chunk reaches the disk as soon as possible.
    While this reaches the highest level of consistency, for many storages
    (especially slower ones) this comes at the cost of speed. For many users,
    the `filesystem` mode is better suited, but for very fast storages this
    mode can be OK.

  This can be set with:

  .. code-block:: console

     # proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem'

If you want to set multiple tuning options simultaneously, you can separate them
with a comma, like this:

.. code-block:: console

   # proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem,chunk-order=none'

.. _ransomware_protection:

Ransomware Protection & Recovery
--------------------------------

`Ransomware <https://en.wikipedia.org/wiki/Ransomware>`_ is a type of malware
that encrypts files until a ransom is paid. Proxmox Backup Server includes
features that help mitigate and recover from ransomware attacks by offering
off-server and off-site synchronization and easy restoration from backups.

Built-in Protection
~~~~~~~~~~~~~~~~~~~

Proxmox Backup Server does not rewrite data for existing blocks. This means
that a compromised Proxmox VE host or any other compromised system that uses
the client to back up data cannot corrupt or modify existing backups in any
way.

The 3-2-1 Rule with Proxmox Backup Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The `3-2-1 rule <https://en.wikipedia.org/wiki/Backup#Storage>`_ is simple but
effective in protecting important data from all sorts of threats, be it fires,
natural disasters or attacks on your infrastructure by adversaries.
In short, the rule states that one should create *3* backups on at least *2*
different types of storage media, of which *1* copy is kept off-site.

Proxmox Backup Server provides tools for storing extra copies of backups in
remote locations and on various types of media.

By setting up a remote Proxmox Backup Server, you can take advantage of the
:ref:`remote sync jobs <backup_remote>` feature and easily create off-site
copies of your backups.
This is recommended, since off-site instances are less likely to be infected by
ransomware in your local network.
You can configure sync jobs to not remove snapshots if they vanished on the
remote source, so that an attacker who has taken over the source cannot cause
deletions of backups on the target hosts.
If the source host falls victim to a ransomware attack, there is a good chance
that sync jobs will fail, triggering an :ref:`error notification
<maintenance_notification>`.

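As a sketch, the corresponding setting on the pulling side is to leave
``remove-vanished`` disabled on the sync job (`<job-id>` is a placeholder for
an existing job):

.. code-block:: console

   # proxmox-backup-manager sync-job update <job-id> --remove-vanished false
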
It is also possible to create :ref:`tape backups <tape_backup>` as a second
storage medium. This way, you get an additional copy of your data on a
different storage medium designed for long-term storage. Additionally, it can
easily be moved around, be it to an off-site location or, for example, into an
on-site fireproof vault for quicker access.

Restrictive User & Access Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Proxmox Backup Server offers a comprehensive and fine-grained :ref:`user and
access management <user_mgmt>` system. The `Datastore.Backup` privilege, for
example, allows only creating, but not deleting or altering, existing backups.

The best way to leverage this access control system is to:

- Use separate API tokens for each host or Proxmox VE cluster that should be
  able to back data up to a Proxmox Backup Server.
- Configure only minimal permissions for such API tokens. They should only have
  a single permission that grants the `DataStore` access role on a very narrow
  ACL path that is restricted to a specific namespace on a specific datastore,
  for example `/datastore/tank/pve-abc-cluster`.

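Putting both recommendations together might look like the following sketch
(the user `backup@pbs` and the token name are illustrative, and
`DatastoreBackup` is assumed here to be the access role granting the
`Datastore.Backup` privilege):

.. code-block:: console

   # proxmox-backup-manager user generate-token backup@pbs pve-abc-cluster
   # proxmox-backup-manager acl update /datastore/tank/pve-abc-cluster DatastoreBackup --auth-id 'backup@pbs!pve-abc-cluster'
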
.. tip:: One best practice to protect against ransomware is not to grant delete
   permissions, but to perform backup pruning directly on Proxmox Backup Server
   using :ref:`prune jobs <maintenance_prune_jobs>`.

Please note that the same also applies to sync jobs. By limiting a sync user's
or an access token's right to only write backups, not delete them, compromised
clients cannot delete existing backups.

Ransomware Detection
~~~~~~~~~~~~~~~~~~~~

A Proxmox Backup Server might still get compromised within insecure networks,
if physical access to the server is attained, or due to weak or insufficiently
protected credentials.
If that happens, and your on-site backups are encrypted by ransomware, the
SHA-256 checksums of the backups will no longer match the previously recorded
ones, and hence restoring the backup will fail.

To detect ransomware inside a compromised guest, it is recommended to
frequently test restoring and booting backups. Make sure to restore to a new
guest and not to overwrite your current guest.
Restoring many guests at once can be cumbersome, which is why it is advisable
to automate this restore testing and to verify that your automated process
works. If this is not feasible, it is recommended to restore random samples
from your backups periodically (for example, once a week or month).
While creating backups is important, verifying that they work is equally
important. This ensures that you are able to react quickly in case of an
emergency and keeps disruption of your services to a minimum.

:ref:`Verification jobs <maintenance_verification>` can also assist in detecting
a ransomware presence on a Proxmox Backup Server. Since verification jobs
regularly check if all backups still match the checksums on record, they will
start to fail if a ransomware starts to encrypt existing backups. Please be
aware that an advanced enough ransomware could circumvent this mechanism.
Hence, consider verification jobs only as an additional, but not a sufficient,
protection measure.

General Prevention Methods and Best Practices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is recommended to take additional security measures, apart from the ones
offered by Proxmox Backup Server. These recommendations include, but are not
limited to:

* Keeping the firmware and software up-to-date to patch exploits and
  vulnerabilities (such as
  `Spectre <https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)>`_ or
  `Meltdown <https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)>`_).
* Following safe and secure network practices, for example using logging and
  monitoring tools and dividing your network so that infrastructure traffic and
  user or even public traffic are separated, for example by setting up VLANs.
* Setting up long-term retention. Since some ransomware might lie dormant for a
  couple of days or weeks before starting to encrypt data, it can be that
  older, existing backups are compromised. Thus, it is important to keep at
  least a few backups over longer periods of time.

For more information on how to avoid ransomware attacks and what to do in case
of a ransomware infection, see official government recommendations like `CISA's
(USA) guide <https://www.cisa.gov/stopransomware/ransomware-guide>`_ or EU
resources like ENISA's `Threat Landscape for Ransomware Attacks
<https://www.enisa.europa.eu/publications/enisa-threat-landscape-for-ransomware-attacks>`_
or `nomoreransom.org <https://www.nomoreransom.org/en/index.html>`_.