``rustup`` Toolchain
====================

We normally want to build with the ``rustc`` Debian package. To do that
you can set the following ``rustup`` configuration::

    # rustup toolchain link system /usr
    # rustup default system


Versioning of proxmox helper crates
===================================

To use current git master code of the proxmox* helper crates, add::

    git = "git://git.proxmox.com/git/proxmox"

or::

    path = "../proxmox/proxmox"

to the proxmox dependency, and update the version to reflect the current,
pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0").
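
For example, the ``proxmox`` dependency in ``Cargo.toml`` could then look like
this while developing against a local checkout (the version and path here are
only illustrative)::

    [dependencies]
    proxmox = { version = "0.1.1-dev.1", path = "../proxmox/proxmox" }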


Local cargo config
==================

This repository ships with a ``.cargo/config`` that replaces the crates.io
registry with packaged crates located in ``/usr/share/cargo/registry``.
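
Such a source replacement typically looks like the following (the
``debian-packages`` name is only illustrative; see the shipped ``.cargo/config``
for the exact contents)::

    [source.crates-io]
    replace-with = "debian-packages"

    [source.debian-packages]
    directory = "/usr/share/cargo/registry"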

A similar config is also applied building with dh_cargo. Cargo.lock needs to be
deleted when switching between packaged crates and crates.io, since the
checksums are not compatible.

To reference new dependencies (or updated versions) that are not yet packaged,
the dependency needs to point directly to a path or git source (e.g., see the
example for the proxmox crate above).


Build
=====

Setup on Debian Buster:

  1. # echo 'deb http://download.proxmox.com/debian/devel/ buster main' >> /etc/apt/sources.list.d/proxmox-devel.list
  2. # sudo wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
  3. # sudo apt update
  4. # sudo apt install devscripts debcargo clang
  5. # git clone git://git.proxmox.com/git/proxmox-backup.git
  6. # sudo mk-build-deps -ir

Note: Step 2 may be skipped if you already added the PVE or PBS package repository.

You are now able to build using the Makefile or cargo itself.
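
For example (``cargo build`` is standard; the ``deb`` target is assumed here to
follow the usual Proxmox Makefile convention, check the Makefile for the actual
targets)::

    # cargo build --release    # build the Rust binaries only
    # make deb                 # build the Debian packages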


Design Notes
============

Here are some random thoughts about the software design (unless I find a better place).


Large chunk sizes
-----------------

It is important to note that large chunk sizes are crucial for
performance. We have a multi-user system, where different people can do
different operations on a datastore at the same time, and most operations
involve reading a series of chunks.

So what is the maximum theoretical speed we can get when reading a
series of chunks? Reading a chunk sequence needs the following steps:

- seek to the first chunk start location
- read the chunk data
- seek to the next chunk start location
- read the chunk data
- ...

Let's use the following disk performance metrics:

:AST: Average Seek Time (seconds)
:MRS: Maximum sequential Read Speed (bytes/second)
:ACS: Average Chunk Size (bytes)

The maximum performance you can get is::

    MAX(ACS) = ACS / (AST + ACS/MRS)

Please note that chunk data is likely to be arranged sequentially on disk,
but this is sort of a best-case assumption.

For a typical rotational disk, we assume the following values::

    AST: 10ms
    MRS: 170MB/s

    MAX(4MB)  = 115.37 MB/s
    MAX(1MB)  = 61.85 MB/s
    MAX(64KB) = 6.02 MB/s
    MAX(4KB)  = 0.39 MB/s
    MAX(1KB)  = 0.10 MB/s
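
For example, plugging the rotational disk values and a 4 MiB chunk
(4194304 bytes) into the formula above gives roughly the first value in the
table::

    MAX(4MB) = 4194304 / (0.010 + 4194304/170000000)
             ≈ 121 * 10^6 bytes/s
             ≈ 115 MiB/s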

Modern SSDs are much faster; let's assume the following::

    max IOPS: 20000 => AST = 0.00005
    MRS: 500MB/s

    MAX(4MB)  = 474 MB/s
    MAX(1MB)  = 465 MB/s
    MAX(64KB) = 354 MB/s
    MAX(4KB)  = 67 MB/s
    MAX(1KB)  = 18 MB/s


Also, the average chunk size directly relates to the number of chunks produced
by a backup::

    CHUNK_COUNT = BACKUP_SIZE / ACS

Here are some statistics from my developer workstation::

    Disk Usage: 65 GB
    Directories: 58971
    Files: 726314
    Files < 64KB: 617541

As you can see, there are really many small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with
more than 700000 chunks.

Instead, our current algorithm only produces large chunks with an
average chunk size of 4MB. With the above data, this produces about 15000
chunks (a factor of 50 fewer chunks).
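
As a rough cross-check with the formula above (treating 65 GB as 65 * 10^9
bytes and ACS as 4 MiB)::

    CHUNK_COUNT = 65 * 10^9 / 4194304 ≈ 15500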