//! This module implements the proxmox backup data storage
//!
//! Proxmox backup splits large files into chunks, and stores them
//! deduplicated using a content addressable storage format.
//!
//! A chunk is simply defined as a binary blob, which is stored inside a
//! `ChunkStore`, addressed by the SHA256 digest of the binary blob.
//!
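//! For illustration, here is a minimal sketch of how a chunk digest could be
//! computed and mapped to a path inside the store. It assumes the `sha2` and
//! `hex` crates; the two-level directory layout is only an assumption made
//! for this example, not a description of the actual on-disk format.
//!
//! ```ignore
//! use sha2::{Digest, Sha256};
//!
//! /// Hypothetical helper: digest a chunk and derive a store-relative path.
//! fn chunk_path(chunk_data: &[u8]) -> (String, [u8; 32]) {
//!     let digest: [u8; 32] = Sha256::digest(chunk_data).into();
//!     let hex = hex::encode(digest);
//!     // Spread chunks over subdirectories by digest prefix (assumed layout).
//!     let path = format!(".chunks/{}/{}", &hex[..4], hex);
//!     (path, digest)
//! }
//! ```
//!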
//! Index files are used to reconstruct the original file. They
//! basically contain a list of SHA256 checksums. The `DynamicIndex*`
//! format is able to deal with dynamic chunk sizes, whereas the
//! `FixedIndex*` format is an optimization to store a list of equal
//! sized chunks.
//!
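//! As a sketch of the fixed-size case (the type and method names below are
//! illustrative, not the actual `FixedIndex*` API): the index is essentially
//! an ordered list of digests, and the byte offset of entry `i` in the
//! original file is simply `i * chunk_size`.
//!
//! ```ignore
//! /// Illustrative model of a fixed index: equally sized chunks,
//! /// identified by their SHA256 digests, in file order.
//! struct FixedIndexModel {
//!     chunk_size: u64,
//!     digests: Vec<[u8; 32]>,
//! }
//!
//! impl FixedIndexModel {
//!     /// Byte range covered by chunk `i` (the last chunk may be shorter).
//!     fn chunk_range(&self, i: usize) -> (u64, u64) {
//!         let start = i as u64 * self.chunk_size;
//!         (start, start + self.chunk_size)
//!     }
//! }
//! ```
//!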
//! # ChunkStore Locking
//!
//! We need to be able to restart the proxmox-backup service daemons,
//! so that we can update the software without rebooting the host. But
//! such restarts must not abort running backup jobs, so we need to
//! keep the old service running until those jobs are finished. This
//! implies that we need some kind of locking for the
//! ChunkStore. Please note that it is perfectly valid to have
//! multiple parallel ChunkStore writers, even when they write the
//! same chunk (because the chunk would have the same name and the
//! same data). The only real problem is garbage collection, because
//! we need to avoid deleting chunks which are still referenced.
//!
//! * Read Index Files:
//!
//!   Acquire shared lock for .idx files.
//!
//! * Delete Index Files:
//!
//!   Acquire exclusive lock for .idx files. This makes sure that we do
//!   not delete index files while they are still in use.
//!
//! * Create Index Files:
//!
//!   Acquire shared lock for ChunkStore (process wide); see the `flock`
//!   sketch after this list.
//!
//!   Note: When creating .idx files, we first create a temporary (.tmp) file,
//!   then do an atomic rename to the final name.
//!
//! * Garbage Collect:
//!
//!   Acquire exclusive lock for ChunkStore (process wide). If we already
//!   hold a shared lock for the ChunkStore, try to upgrade that lock.
//!
//! * Server Restart:
//!
//!   Try to abort the running garbage collection to release exclusive
//!   ChunkStore locks ASAP. Start the new service with the existing listening
//!   socket.
//!
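//! A minimal sketch of what the process-wide shared ChunkStore lock could
//! look like, using `flock(2)` via the `nix` crate. The helper name and the
//! `.lock` file location are assumptions made for this example, not the
//! actual implementation.
//!
//! ```ignore
//! use std::fs::File;
//! use std::os::unix::io::AsRawFd;
//!
//! use anyhow::Error;
//! use nix::fcntl::{flock, FlockArg};
//!
//! /// Hypothetical helper: take a shared lock on the ChunkStore lock file.
//! /// Writers (index creation) hold this shared lock; garbage collection
//! /// needs the exclusive lock and therefore waits until they are done.
//! fn lock_chunk_store_shared(store_dir: &str) -> Result<File, Error> {
//!     let file = File::open(format!("{}/.lock", store_dir))?;
//!     flock(file.as_raw_fd(), FlockArg::LockShared)?;
//!     Ok(file) // the lock is released when the file handle is dropped
//! }
//! ```
//!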
//! # Garbage Collection (GC)
//!
//! Deleting backups is as easy as deleting the corresponding .idx
//! files. Unfortunately, this does not free up any storage, because
//! those files just contain references to chunks.
//!
//! To free up some storage, we run a garbage collection process at
//! regular intervals. The collector uses a mark and sweep
//! approach. In the first phase, it scans all .idx files to mark used
//! chunks. The second phase then removes all unmarked chunks from the
//! store.
//!
//! The above locking mechanism makes sure that we are the only
//! process running GC. But we still want to be able to create backups
//! during GC, so there may be multiple backup threads/tasks running,
//! either started before GC started, or started while GC is running.
//!
//! ## `atime` based GC
//!
//! The idea here is to mark chunks by updating the `atime` (access
//! timestamp) on the chunk file. This is quite simple and does not
//! need additional RAM.
//!
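//! One way to implement the mark step is a `utimensat(2)` call per referenced
//! chunk. The following is a minimal sketch using the `libc` crate; the helper
//! name is hypothetical and error handling is reduced to the bare minimum.
//!
//! ```ignore
//! use std::ffi::CString;
//! use std::io;
//!
//! /// Hypothetical mark step: bump the atime of a chunk file to "now",
//! /// leaving the mtime untouched.
//! fn touch_chunk_atime(path: &str) -> io::Result<()> {
//!     let c_path = CString::new(path)?;
//!     let times = [
//!         libc::timespec { tv_sec: 0, tv_nsec: libc::UTIME_NOW },  // atime := now
//!         libc::timespec { tv_sec: 0, tv_nsec: libc::UTIME_OMIT }, // keep mtime
//!     ];
//!     let rc = unsafe { libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0) };
//!     if rc == 0 { Ok(()) } else { Err(io::Error::last_os_error()) }
//! }
//! ```
//!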
//! One minor problem is that recent Linux versions use the `relatime`
//! mount flag by default for performance reasons (yes, we want
//! that). When enabled, `atime` data is written to the disk only if
//! the file has been modified since the `atime` data was last updated
//! (`mtime`), or if the file was last accessed more than a certain
//! amount of time ago (by default 24h). So we may only delete chunks
//! with `atime` older than 24 hours.
//!
//! Another problem arises from running backups. The mark phase does
//! not find any chunks from those backups, because there is no .idx
//! file for them yet (it is only created after the backup finishes).
//! Chunks created or touched by those backups may have an `atime` as
//! old as the start time of those backups. Please note that the backup
//! start time may predate the GC start time. So we may only delete
//! chunks older than the start time of those running backup jobs.
//!
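//! Putting both constraints together, the sweep phase may only remove a chunk
//! whose `atime` is older than the GC start time minus the 24h grace period,
//! and also older than the start of the oldest still-running backup job. A
//! sketch of that check using only the standard library (the function name
//! and signature are made up for this example):
//!
//! ```ignore
//! use std::fs;
//! use std::io;
//! use std::path::Path;
//! use std::time::{Duration, SystemTime};
//!
//! /// Hypothetical sweep-phase check: is this chunk safe to delete?
//! fn chunk_is_garbage(
//!     chunk: &Path,
//!     gc_start: SystemTime,
//!     oldest_running_backup_start: Option<SystemTime>,
//! ) -> io::Result<bool> {
//!     // 24h grace period to cope with `relatime` (see above).
//!     let mut cutoff = gc_start - Duration::from_secs(24 * 3600);
//!     // Never delete chunks a still-running backup may have touched.
//!     if let Some(backup_start) = oldest_running_backup_start {
//!         cutoff = cutoff.min(backup_start);
//!     }
//!     let atime = fs::metadata(chunk)?.accessed()?;
//!     Ok(atime < cutoff)
//! }
//! ```
//!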
//! ## Store `marks` in RAM using a HASH
//!
//! Not sure if this is better. TODO

use anyhow::{bail, Error};

// Note: .pcat1 => Proxmox Catalog Format version 1
pub const CATALOG_NAME: &str = "catalog.pcat1.didx";

#[macro_export]
macro_rules! PROXMOX_BACKUP_PROTOCOL_ID_V1 {
    () => { "proxmox-backup-protocol-v1" };
}

#[macro_export]
macro_rules! PROXMOX_BACKUP_READER_PROTOCOL_ID_V1 {
    () => { "proxmox-backup-reader-protocol-v1" };
}

/// Unix system user used by proxmox-backup-proxy
pub const BACKUP_USER_NAME: &str = "backup";

/// Return User info for the 'backup' user (``getpwnam_r(3)``)
pub fn backup_user() -> Result<nix::unistd::User, Error> {
    match nix::unistd::User::from_name(BACKUP_USER_NAME)? {
        Some(user) => Ok(user),
        None => bail!("Unable to lookup backup user."),
    }
}

pub use file_formats::*;

pub use crypt_config::*;

pub use key_derivation::*;

pub use crypt_reader::*;

pub use crypt_writer::*;

pub use checksum_reader::*;

pub use checksum_writer::*;

pub use data_blob::*;

mod data_blob_reader;
pub use data_blob_reader::*;

mod data_blob_writer;
pub use data_blob_writer::*;

pub use chunk_stream::*;

pub use chunk_stat::*;

pub use read_chunk::*;

pub use chunk_store::*;

pub use fixed_index::*;

pub use dynamic_index::*;

pub use backup_info::*;

pub use datastore::*;

pub use catalog_shell::*;

mod async_index_reader;
pub use async_index_reader::*;