ifdef::manvolnum[]
PVE(8)
======
include::attributes.txt[]
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-cli.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve} related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.
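
As a rough sanity check of that figure (the ~2 KiB average guest
configuration size used here is an assumption; real configs vary):

```python
# Back-of-envelope check of the pmxcfs size limit mentioned above.
limit_bytes = 30 * 1024 * 1024    # current pmxcfs limit: 30 MB
avg_config = 2 * 1024             # assumed average guest config (~2 KiB)

print(limit_bytes // avg_config)  # → 15360 configs fit, i.e. several thousand
```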

This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism


POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX like. But
some features are simply not implemented, because we do not need them:

* you can just generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on paths)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)


File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.

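The rule above can be sketched as a small path predicate; this is only
an illustration of the convention, not how pmxcfs itself enforces
access (the node name and filenames in the examples are hypothetical):

```python
import fnmatch

# Paths under a priv/ directory are readable by root only; everything
# else is also readable by group www-data.
ROOT_ONLY = ["/etc/pve/priv/*", "/etc/pve/nodes/*/priv/*"]

def root_only(path: str) -> bool:
    return any(fnmatch.fnmatch(path, pat) for pat in ROOT_ONLY)

print(root_only("/etc/pve/priv/authkey.key"))       # True
print(root_only("/etc/pve/nodes/pve1/priv/x.key"))  # True
print(root_only("/etc/pve/storage.cfg"))            # False
```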

Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File System Layout
------------------

The file system is mounted at:

 /etc/pve

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`authkey.pub` | Public key used by ticket system
|`pve-root-ca.pem` | Public certificate of cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by ticket system
|`priv/pve-root-ca.key` | Private key of cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for web server (signed by cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
|=======


Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======

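The `local` link from the table can be illustrated in a scratch
directory (the node name `pve1` below is just an example):

```python
import os
import tempfile

# Recreate the layout: <root>/local -> nodes/<LOCAL_HOST_NAME>
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "nodes", "pve1"))
os.symlink(os.path.join("nodes", "pve1"), os.path.join(root, "local"))

print(os.readlink(os.path.join(root, "local")))  # → nodes/pve1
```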

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version`    |File versions (to detect file modifications)
|`.members`    |Info about cluster members
|`.vmlist`     |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd`        |RRD data (most recent entries)
|=======

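Since these status files are plain JSON, they are easy to process in
scripts. The sketch below filters a `.members`-style document for
online nodes; the sample data is made up, and the exact field names
(`nodelist`, `online`, ...) reflect typical pmxcfs output and may
differ between releases:

```python
import json

# Illustrative sample resembling /etc/pve/.members content.
sample = """{
  "nodename": "pve1",
  "cluster": {"name": "demo", "nodes": 2, "quorate": 1},
  "nodelist": {
    "pve1": {"id": 1, "online": 1, "ip": "192.168.1.10"},
    "pve2": {"id": 2, "online": 0, "ip": "192.168.1.11"}
  }
}"""

members = json.loads(sample)
online = sorted(n for n, v in members["nodelist"].items() if v["online"])
print(online)  # → ['pve1']
```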

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug


Recovery
--------

If you have major problems with your Proxmox VE host, for example
hardware issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` to a new Proxmox VE host. On the new
host (with nothing running), you need to stop the `pve-cluster`
service and replace the `config.db` file (required permissions
`0600`). Afterwards, adapt `/etc/hostname` and `/etc/hosts` according
to the lost Proxmox VE host, then reboot and check (and don't forget
your VM/CT data).


Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you removed it from
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data is destroyed.

In some cases, you might prefer to put a node back to local mode
without reinstalling, which is described here:

* stop the cluster file system in `/etc/pve/`

 # systemctl stop pve-cluster

* start it again but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart PVE services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]