ifdef::manvolnum[]
pmxcfs(8)
=========
include::attributes.txt[]
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
:pve-toplevel:
endif::manvolnum[]

The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all PVE-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.

This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* strong consistency checks to avoid duplicate VM IDs
* read-only mode when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* a distributed locking mechanism

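As a quick demonstration of the real-time replication, a file written
below `/etc/pve` on one node is visible on all other nodes almost
immediately (an illustrative session; `node1` and `node2` are
placeholder names):

On `node1`:

 # echo test > /etc/pve/test.txt

On `node2`, moments later:

 # cat /etc/pve/test.txt
 test
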

POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can only create normal files and directories, but no symbolic
links, ...

* you can't rename non-empty directories (because this makes it easier
to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on the path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)

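Some of these restrictions can be observed directly. For example (an
illustrative session; the exact errors returned depend on the pmxcfs
version):

 # ln -s storage.cfg /etc/pve/test-link    # fails, symlinks are not implemented
 # chmod 600 /etc/pve/storage.cfg          # fails, permissions are based on the path
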

File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

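For example (illustrative output; file size and date will differ):

 # ls -l /etc/pve/storage.cfg
 -rw-r----- 1 root www-data 119 Jan  1 00:00 /etc/pve/storage.cfg
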

Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

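For debugging, the database file can be inspected read-only with the
`sqlite3` command line tool (a minimal sketch, assuming the single
`tree` table used by current pmxcfs versions; never modify the
database of a running cluster this way):

 # sqlite3 /var/lib/pve-cluster/config.db 'SELECT name FROM tree;'
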
File System Layout
------------------

The file system is mounted at:

 /etc/pve

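You can verify the mount with standard tools (illustrative output,
shortened):

 # findmnt /etc/pve
 TARGET   SOURCE    FSTYPE OPTIONS
 /etc/pve /dev/fuse fuse   rw,nosuid,nodev,relatime,...
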
Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x, this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`authkey.pub` | Public key used by ticket system
|`pve-root-ca.pem` | Public certificate of cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by ticket system
|`priv/pve-root-ca.key` | Private key of cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for web server (signed by cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
|=======


Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======

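For example, on a node named `node1` (a placeholder name), the `local`
link resolves to that node's own configuration directory:

 # readlink /etc/pve/local
 nodes/node1
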

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======

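For example, `.members` can be read to check cluster membership from
scripts (illustrative output for a hypothetical two-node cluster; the
exact set of fields depends on the pmxcfs version):

 # cat /etc/pve/.members
 {
 "nodename": "node1",
 "version": 4,
 "cluster": { "name": "mycluster", "version": 2, "nodes": 2, "quorate": 1 },
 "nodelist": {
   "node1": { "id": 1, "online": 1, "ip": "192.168.1.1"},
   "node2": { "id": 2, "online": 1, "ip": "192.168.1.2"}
   }
 }
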

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug


Recovery
--------

If you have major problems with your Proxmox VE host, e.g. hardware
issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Then adapt `/etc/hostname` and `/etc/hosts`
according to the lost Proxmox VE host, reboot and check (and don't
forget your VM/CT data).

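A minimal sketch of the database swap, assuming the old host's disk is
mounted read-only at `/mnt/olddisk` (a hypothetical path):

 # systemctl stop pve-cluster
 # cp /mnt/olddisk/var/lib/pve-cluster/config.db /var/lib/pve-cluster/config.db
 # chmod 0600 /var/lib/pve-cluster/config.db
 # systemctl start pve-cluster
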

Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you remove it from
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling it, which is described here:

* stop the cluster file system in `/etc/pve/`

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart PVE services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]