[[chapter_pmxcfs]]
ifdef::manvolnum[]
pmxcfs(8)
=========
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
:pve-toplevel:
endif::manvolnum[]

The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use it to store all {pve}-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.
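
While the RAM copy itself is not directly visible, the on-disk database
gives a rough idea of how much configuration data is stored. A minimal
check, using the database path described in the Recovery section below:

 du -h /var/lib/pve-cluster/config.db   # size of the persistent pmxcfs database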

This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* strong consistency checks to avoid duplicate VM IDs
* read-only behavior when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* a distributed locking mechanism


POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like, but
some features are simply not implemented, because we do not need them.
The example after this list illustrates a few of these restrictions:

* you can just generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on the path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)
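
For illustration, a minimal shell session showing some of these
restrictions in practice (assuming a node with `/etc/pve` mounted;
`testfile` is just a placeholder name, and the exact error messages may
vary):

 touch /etc/pve/testfile            # creating regular files works
 ln -s testfile /etc/pve/testlink   # fails: symbolic links are not supported
 chmod 777 /etc/pve/testfile        # fails: permissions are based on the path
 rm /etc/pve/testfile               # clean up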


File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.
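
For example, ownership and file mode can be inspected with `stat` (a
standard coreutils tool; the files chosen here are taken from the table
in the next section):

 stat -c '%U:%G %a' /etc/pve/user.cfg          # root:www-data, readable by group
 stat -c '%U:%G %a' /etc/pve/priv/authkey.key  # accessible by root only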


Technology
----------

We use the https://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and https://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
https://github.com/libfuse/libfuse[FUSE].

File System Layout
------------------

The file system is mounted at:

 /etc/pve
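
On a running node, you can verify the mount, for example with `findmnt`
(part of util-linux); it should report a FUSE-based file system:

 findmnt /etc/pve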

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x, this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`status.cfg` | {pve} external metrics server configuration
|`authkey.pub` | Public key used by the ticket system
|`pve-root-ca.pem` | Public certificate of the cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by the ticket system
|`priv/pve-root-ca.key` | Private key of the cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for the web server (signed by the cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for the web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | Container configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======


Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======
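
For example, on a node named `node1` (a hypothetical hostname), the
`local` link resolves to that node's directory:

 readlink /etc/pve/local   # prints the link target, e.g. nodes/node1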


Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` | File versions (to detect file modifications)
|`.members` | Info about cluster members
|`.vmlist` | List of all VMs
|`.clusterlog` | Cluster log (last 50 entries)
|`.rrd` | RRD data (most recent entries)
|=======
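
Since these are plain JSON files, they can be read with standard tools.
For example (the output depends on your cluster, and `jq` is only needed
for pretty-printing):

 cat /etc/pve/.members
 jq . /etc/pve/.vmlist   # pretty-print the VM list, if jq is installed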


Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug


Recovery
--------

If you have major problems with your Proxmox VE host, for example
hardware issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
host. On the new host (with nothing running), stop the `pve-cluster`
service and replace the `config.db` file (required permissions
`0600`). Then adapt `/etc/hostname` and `/etc/hosts` to match the
lost Proxmox VE host, reboot and check. (And don't forget your
VM/CT data.)
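
A minimal sketch of these steps on the new host, assuming the old
database was copied to `/root/config.db` (a hypothetical location):

 systemctl stop pve-cluster                        # with nothing running yet
 cp /root/config.db /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db
 # now adapt /etc/hostname and /etc/hosts, then reboot and check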


Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it
from your cluster. This makes sure that all secret cluster/SSH keys and
any shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling it, which is described in
<<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>.


Recovering/Moving Guests from Failed Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as the
owner of the respective guest. This concept enables the usage of local locks
instead of expensive cluster-wide locks for preventing concurrent guest
configuration changes.

As a consequence, if the owning node of a guest fails (e.g. because of a power
outage or fencing event), a regular migration is not possible (even if all
the disks are located on shared storage), because such a local lock on the
(dead) owning node is unobtainable. This is not a problem for HA-managed
guests, as {pve}'s High Availability stack includes the necessary
(cluster-wide) locking and watchdog functionality to ensure correct and
automatic recovery of guests from fenced nodes.

If a non-HA-managed guest has only shared disks (and no other local resources
which are only available on the failed node are configured), a manual recovery
is possible by simply moving the guest configuration file from the failed
node's directory in `/etc/pve/` to an alive node's directory (which changes the
logical owner or location of the guest).

For example, recovering the VM with ID `100` from a dead `node1` to another
node `node2` works with the following command, executed when logged in as root
on any member node of the cluster:

 mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/

WARNING: Before manually recovering a guest like this, make absolutely sure
that the failed source node is really powered off/fenced. Otherwise, {pve}'s
locking principles are violated by the `mv` command, which can have unexpected
consequences.

WARNING: Guests with local disks (or other local resources which are only
available on the dead node) are not recoverable like this. Either wait for the
failed node to rejoin the cluster or restore such guests from backups.

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]