//! This module implements the proxmox backup data storage
//!
//! Proxmox backup splits large files into chunks, and stores them
//! deduplicated using a content addressable storage format.
//!
//! A chunk is simply defined as a binary blob, which is stored inside a
//! `ChunkStore`, addressed by the SHA256 digest of the binary blob.
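//!
//! As an illustration (not the actual implementation), computing a
//! chunk's address and a possible on-disk location could look like
//! this; the `sha2` crate and the `.chunks/<prefix>/<digest>` layout
//! are assumptions made for this sketch:
//!
//! ```ignore
//! use sha2::{Digest, Sha256};
//! use std::path::{Path, PathBuf};
//!
//! fn chunk_path(store_base: &Path, data: &[u8]) -> PathBuf {
//!     // The chunk is addressed solely by the digest of its content.
//!     let digest = Sha256::digest(data);
//!     let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
//!     // Hypothetical layout: a short prefix directory avoids one huge flat directory.
//!     store_base.join(".chunks").join(&hex[..4]).join(&hex)
//! }
//! ```
//!
//! Because identical data yields an identical digest (and thus path),
//! storing the same chunk twice is a no-op, which is how deduplication
//! falls out of the addressing scheme.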
//!
//! Index files are used to reconstruct the original file. They
//! basically contain a list of SHA256 checksums. The `DynamicIndex*`
//! format is able to deal with dynamic chunk sizes, whereas the
//! `FixedIndex*` format is an optimization to store a list of equally
//! sized chunks.
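//!
//! A conceptual model of the two index flavors (simplified; the real
//! on-disk formats additionally contain headers, magic numbers, and so
//! on):
//!
//! ```
//! type ChunkDigest = [u8; 32]; // SHA256
//!
//! // Fixed index: all chunks share one size, so byte offsets are implicit.
//! struct FixedIndexModel {
//!     chunk_size: u64,
//!     digests: Vec<ChunkDigest>,
//! }
//!
//! // Dynamic index: chunk sizes vary, so each entry records where its
//! // chunk ends inside the original file.
//! struct DynamicIndexModel {
//!     entries: Vec<(u64, ChunkDigest)>, // (end offset, digest)
//! }
//! ```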
//!
//! # ChunkStore Locking
//!
//! We need to be able to restart the proxmox-backup service daemons,
//! so that we can update the software without rebooting the host. But
//! such restarts must not abort running backup jobs, so we need to
//! keep the old service running until those jobs are finished. This
//! implies that we need some kind of locking for the
//! ChunkStore. Please note that it is perfectly valid to have
//! multiple parallel ChunkStore writers, even when they write the
//! same chunk (because the chunk would have the same name and the
//! same data). The only real problem is garbage collection, because
//! we need to avoid deleting chunks which are still referenced.
//!
//! * Read Index Files:
//!
//! Acquire shared lock for .idx files (see the locking sketch after
//! this list).
//!
//!
//! * Delete Index Files:
//!
//! Acquire exclusive lock for .idx files. This makes sure that we do
//! not delete index files while they are still in use.
//!
//!
//! * Create Index Files:
//!
//! Acquire shared lock for ChunkStore (process wide).
//!
//! Note: When creating .idx files, we first create a temporary (.tmp)
//! file, then do an atomic rename (see the rename sketch after this
//! list).
//!
//!
//! * Garbage Collect:
//!
//! Acquire exclusive lock for ChunkStore (process wide). If we already
//! have a shared lock for the ChunkStore, try to upgrade that lock.
//!
//!
//! * Server Restart
//!
//! Try to abort any running garbage collection to release the
//! exclusive ChunkStore lock as soon as possible. Start the new
//! service with the existing listening socket.
//!
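//! The lock types above map naturally onto `flock(2)`. A minimal
//! sketch, here using the `fs2` crate (an assumption; the real code
//! may use other locking primitives):
//!
//! ```ignore
//! use fs2::FileExt;
//! use std::fs::File;
//! use std::path::Path;
//!
//! fn open_index_for_reading(path: &Path) -> std::io::Result<File> {
//!     let file = File::open(path)?;
//!     file.lock_shared()?; // many readers may hold this concurrently
//!     Ok(file)
//! }
//!
//! fn delete_index(path: &Path) -> std::io::Result<()> {
//!     let file = File::open(path)?;
//!     file.lock_exclusive()?; // blocks until all shared locks are released
//!     std::fs::remove_file(path)
//! }
//! ```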
//!
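//! The create-then-rename pattern for new .idx files, sketched with
//! the standard library (the `sync_all` before the rename is an
//! assumption about the intended durability semantics):
//!
//! ```
//! use std::fs;
//! use std::io::Write;
//! use std::path::Path;
//!
//! fn write_index_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
//!     // Write to a temporary file first ...
//!     let tmp = path.with_extension("tmp");
//!     let mut file = fs::File::create(&tmp)?;
//!     file.write_all(data)?;
//!     file.sync_all()?; // flush data before publishing the file
//!     // ... then atomically move it into place. Readers holding a
//!     // shared lock on the old file keep a valid handle; new readers
//!     // see the complete new file.
//!     fs::rename(&tmp, path)
//! }
//! ```
//!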
//! # Garbage Collection (GC)
//!
//! Deleting backups is as easy as deleting the corresponding .idx
//! files. Unfortunately, this does not free up any storage, because
//! those files just contain references to chunks.
//!
//! To free up some storage, we run a garbage collection process at
//! regular intervals. The collector uses a mark-and-sweep
//! approach. In the first phase, it scans all .idx files to mark used
//! chunks. The second phase then removes all unmarked chunks from the
//! store.
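//!
//! A skeleton of the two phases (all names here are hypothetical, and
//! how a "mark" is stored is deliberately left open; see the variants
//! below):
//!
//! ```ignore
//! fn garbage_collect(store: &ChunkStore) -> Result<(), Error> {
//!     // Phase 1: mark every chunk referenced by any .idx file.
//!     for index in store.list_index_files()? {
//!         for digest in index.chunk_digests()? {
//!             store.mark_chunk(&digest)?;
//!         }
//!     }
//!     // Phase 2: sweep all chunks that did not get marked.
//!     for chunk in store.list_chunks()? {
//!         if !store.is_marked(&chunk)? {
//!             store.remove_chunk(&chunk)?;
//!         }
//!     }
//!     Ok(())
//! }
//! ```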
//!
//! The above locking mechanism makes sure that we are the only
//! process running GC. But we still want to be able to create backups
//! during GC, so there may be multiple backup threads/tasks running,
//! either started before GC started or while GC is running.
//!
//! ## `atime`-based GC
//!
//! The idea here is to mark chunks by updating the `atime` (access
//! timestamp) on the chunk file. This is quite simple and does not
//! need additional RAM.
//!
//! One minor problem is that recent Linux versions use the `relatime`
//! mount flag by default for performance reasons (yes, we want
//! that). When enabled, `atime` data is written to the disk only if
//! the file has been modified (`mtime`) since the `atime` data was
//! last updated, or if the file was last accessed more than a certain
//! amount of time ago (by default 24 hours). So we may only delete
//! chunks with an `atime` older than 24 hours.
//!
//! Another problem arises from running backups. The mark phase does
//! not find any chunks from those backups, because there is no .idx
//! file for them yet (it is created after the backup finishes). Chunks
//! created or touched by those backups may have an `atime` as old as
//! the start time of those backups. Please note that the backup start
//! time may predate the GC start time. So we may only delete chunks
//! older than the start time of those running backup jobs.
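//!
//! Combining both constraints, a chunk may only be swept if its
//! `atime` is older than the `relatime` grace period *and* older than
//! the start of the oldest running backup. A sketch of that cutoff
//! check (std only; the 24 hour constant mirrors the default
//! `relatime` threshold):
//!
//! ```
//! use std::time::{Duration, SystemTime};
//!
//! fn may_sweep(
//!     chunk_atime: SystemTime, // e.g. from fs::metadata(path)?.accessed()?
//!     oldest_running_backup_start: Option<SystemTime>,
//! ) -> bool {
//!     // Start with the relatime-safe cutoff: now minus 24 hours.
//!     let mut cutoff = SystemTime::now() - Duration::from_secs(24 * 3600);
//!     // Move the cutoff further back if an older backup is still running.
//!     if let Some(start) = oldest_running_backup_start {
//!         if start < cutoff {
//!             cutoff = start;
//!         }
//!     }
//!     chunk_atime < cutoff
//! }
//! ```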
//!
//!
//! ## Store `marks` in RAM using a hash table
//!
//! Not sure if this is better. TODO
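//!
//! A sketch of this variant (hypothetical helper, not existing code):
//! marks live in a `HashSet` keyed by chunk digest, trading roughly 32
//! bytes of RAM per used chunk for independence from filesystem
//! `atime` semantics:
//!
//! ```
//! use std::collections::HashSet;
//! use std::path::PathBuf;
//!
//! type ChunkDigest = [u8; 32];
//!
//! fn sweep_with_ram_marks(
//!     marked: &HashSet<ChunkDigest>, // filled during the mark phase
//!     chunks: impl Iterator<Item = (ChunkDigest, PathBuf)>,
//! ) -> std::io::Result<()> {
//!     for (digest, path) in chunks {
//!         if !marked.contains(&digest) {
//!             std::fs::remove_file(path)?;
//!         }
//!     }
//!     Ok(())
//! }
//! ```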
// Note: .pcat1 => Proxmox Catalog Format version 1
pub const CATALOG_NAME: &str = "catalog.pcat1.didx";

#[macro_export]
macro_rules! PROXMOX_BACKUP_PROTOCOL_ID_V1 {
    () => { "proxmox-backup-protocol-v1" }
}

#[macro_export]
macro_rules! PROXMOX_BACKUP_READER_PROTOCOL_ID_V1 {
    () => { "proxmox-backup-reader-protocol-v1" }
}

mod file_formats;
pub use file_formats::*;

mod manifest;
pub use manifest::*;

mod crypt_config;
pub use crypt_config::*;

mod key_derivation;
pub use key_derivation::*;

mod crypt_reader;
pub use crypt_reader::*;

mod crypt_writer;
pub use crypt_writer::*;

mod checksum_reader;
pub use checksum_reader::*;

mod checksum_writer;
pub use checksum_writer::*;

mod chunker;
pub use chunker::*;

mod data_blob;
pub use data_blob::*;

mod data_blob_reader;
pub use data_blob_reader::*;

mod data_blob_writer;
pub use data_blob_writer::*;

mod catalog_blob;
pub use catalog_blob::*;

mod chunk_stream;
pub use chunk_stream::*;

mod chunk_stat;
pub use chunk_stat::*;

mod read_chunk;
pub use read_chunk::*;

mod chunk_store;
pub use chunk_store::*;

mod index;
pub use index::*;

mod fixed_index;
pub use fixed_index::*;

mod dynamic_index;
pub use dynamic_index::*;

mod backup_info;
pub use backup_info::*;

mod datastore;
pub use datastore::*;