[[chapter_pmxcfs]]
ifdef::manvolnum[]
pmxcfs(8)
=========
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
:pve-toplevel:
endif::manvolnum[]

The Proxmox Cluster File System (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve}-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.
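
For a rough idea of how much of this limit is in use, you can check the
size of the backing database file with standard tools, for example:

 du -h /var/lib/pve-cluster/config.db

Note that the on-disk file also contains database bookkeeping data, so
this is only an approximation of the in-memory size.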

This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* strong consistency checks to avoid duplicate VM IDs
* read-only access when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* a distributed locking mechanism


POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like, but
some features are simply not implemented, because we do not need them:

* you can only create normal files and directories, but no symbolic
  links, etc.

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique)

* you can't change file permissions (permissions are based on paths)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)

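A couple of these restrictions can be observed directly on a node; for
example (the link name below is made up, and the exact error behavior
may vary):

 ln -s /etc/pve/storage.cfg /etc/pve/storage-link   # fails: no symbolic links
 chmod 600 /etc/pve/storage.cfg                     # not permitted: permissions are fixed by the path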

File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.
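
You can verify this from a shell on any node; the listing for the private
directory should look roughly like the following (size and timestamp are
illustrative):

 ls -ld /etc/pve/priv
 drwx------ 2 root www-data 0 Jan  1 12:00 /etc/pve/priv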


Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].
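
For illustration, you can peek at the backing database with the `sqlite3`
command line tool (ideally on a test system). Note that the schema,
including the `tree` table name assumed here, is an internal
implementation detail and may change:

 sqlite3 /var/lib/pve-cluster/config.db 'SELECT name FROM tree LIMIT 5;'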

File System Layout
------------------

The file system is mounted at:

 /etc/pve

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x, this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`status.cfg` | {pve} external metrics server configuration
|`authkey.pub` | Public key used by the ticket system
|`pve-root-ca.pem` | Public certificate of the cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by the ticket system
|`priv/pve-root-ca.key` | Private key of the cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for the web server (signed by the cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for the web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | Configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======

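Since the configuration of every node is replicated, you can inspect any
node's files from any cluster member. For example, listing the per-node
directories from an arbitrary node:

 ls /etc/pve/nodes/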

Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======

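Because these links always resolve to the local node's own directory, the
same path can be used in scripts on every node. For example:

 readlink /etc/pve/local
 nodes/<LOCAL_HOST_NAME>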

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======

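These status files can be read like regular files. For example, `.members`
returns a JSON document similar to the following (names, versions, and
addresses are illustrative):

 cat /etc/pve/.members
 {
 "nodename": "node1",
 "version": 8,
 "cluster": { "name": "mycluster", "version": 2, "nodes": 2, "quorate": 1 },
 "nodelist": {
   "node1": { "id": 1, "online": 1, "ip": "192.168.15.91"},
   "node2": { "id": 2, "online": 1, "ip": "192.168.15.92"}
   }
 }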

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug

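The messages are written to the syslog; on a systemd-based installation
you can follow them with, for example:

 journalctl -u pve-cluster -f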


Recovery
--------

If you have major problems with your Proxmox VE host, for example hardware
issues, it could be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Then adapt `/etc/hostname` and `/etc/hosts`
according to the lost Proxmox VE host, reboot, and check (and don't
forget your VM/CT data).
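
A minimal sketch of these steps, assuming the salvaged database has already
been copied to `/root/config.db` on the new host (a path made up for this
example):

 systemctl stop pve-cluster
 cp /root/config.db /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db
 systemctl start pve-cluster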


Remove Cluster Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it from
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode without
reinstalling it, as described in
<<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>.

Recovering/Moving Guests from Failed Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as
the owner of the respective guest. This concept enables the usage of local
locks instead of expensive cluster-wide locks for preventing concurrent guest
configuration changes.

As a consequence, if the owning node of a guest fails (e.g., because of a
power outage or a fencing event), a regular migration is not possible (even if
all the disks are located on shared storage), because such a local lock on the
(dead) owning node is unobtainable. This is not a problem for HA-managed
guests, as {pve}'s High Availability stack includes the necessary
(cluster-wide) locking and watchdog functionality to ensure correct and
automatic recovery of guests from fenced nodes.

If a non-HA-managed guest has only shared disks (and no other local resources
which are only available on the failed node are configured), a manual recovery
is possible by simply moving the guest configuration file from the failed
node's directory in `/etc/pve/` to an alive node's directory (which changes the
logical owner or location of the guest).

For example, recovering the VM with ID `100` from a dead `node1` to another
node `node2` works with the following command executed when logged in as root
on any member node of the cluster:

 mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/

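Container configurations are moved the same way, just below the `lxc/`
directories instead; for example, for a hypothetical container with ID `200`:

 mv /etc/pve/nodes/node1/lxc/200.conf /etc/pve/nodes/node2/lxc/
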
WARNING: Before manually recovering a guest like this, make absolutely sure
that the failed source node is really powered off/fenced. Otherwise, {pve}'s
locking principles are violated by the `mv` command, which can have unexpected
consequences.

WARNING: Guests with local disks (or other local resources which are only
available on the dead node) are not recoverable like this. Either wait for the
failed node to rejoin the cluster or restore such guests from backups.

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]