include::attributes.txt[]

pmxcfs - Proxmox Cluster File System

include::pmxcfs.8-cli.adoc[]

Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
The Proxmox Cluster File System (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve} related
configuration files.
Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes restrictions
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.
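To get a feel for that limit, a rough back-of-the-envelope calculation. The ~1KiB average guest configuration size used here is an assumption for illustration, not a measured value:

```shell
# Rough capacity estimate for the 30MB in-memory limit.
# Assumption: a typical guest config file is around 1KiB.
limit_bytes=$((30 * 1024 * 1024))
avg_config_bytes=1024
echo $((limit_bytes / avg_config_bytes))   # roughly 30000 configs fit
```

Even with larger configuration files, the limit comfortably covers several thousand guests.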
This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism
The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can just generate normal files and directories, but no symbolic
  links

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique)

* you can't change file permissions (permissions are based on path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)
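The last two points matter when tools rewrite files under `/etc/pve`. A common workaround for non-atomic truncating writes is to write a temporary file first and then rename it over the target, since `rename(2)` replaces an existing file atomically within a directory. A minimal sketch of the pattern, demonstrated in a throwaway scratch directory (the file name is only an example; the same pattern applies to files under `/etc/pve`):

```shell
# Replace a file atomically: write a temp file, then rename it over
# the target instead of truncating the target in place.
dir=$(mktemp -d)
printf 'old\n' > "$dir/user.cfg"
printf 'new\n' > "$dir/user.cfg.tmp"
mv "$dir/user.cfg.tmp" "$dir/user.cfg"   # rename(2) is atomic
cat "$dir/user.cfg"                      # -> new
rm -r "$dir"
```

Readers of the file always see either the complete old content or the complete new content, never a truncated intermediate state.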
All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/
We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].
The file system is mounted at:

 /etc/pve
[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`authkey.pub` | Public key used by the ticket system
|`pve-root-ca.pem` | Public certificate of the cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by the ticket system
|`priv/pve-root-ca.key` | Private key of the cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for the web server (signed by the cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for the web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======
[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======
Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======
Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug
If you have major problems with your Proxmox VE host, e.g. hardware
issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Then adapt `/etc/hostname` and `/etc/hosts`
according to the lost Proxmox VE host, reboot and check. (And don't
forget your VM/CT data!)
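The steps above can be sketched as a command sequence, to be run as root on the new host. `old-host-config.db` is a hypothetical name for the database file copied over from the failed host:

```shell
# Recovery sketch -- run as root on the new Proxmox VE host.
# 'old-host-config.db' is a placeholder for the copied database file.
systemctl stop pve-cluster
cp old-host-config.db /var/lib/pve-cluster/config.db
chown root:root /var/lib/pve-cluster/config.db
chmod 0600 /var/lib/pve-cluster/config.db
# Now adapt /etc/hostname and /etc/hosts to match the lost host,
# then reboot and verify.
```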
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The recommended way is to reinstall the node after you have removed
it from your cluster. This makes sure that all secret cluster/ssh
keys and any shared configuration data are destroyed.

In some cases, you might prefer to put a node back to local mode
without reinstalling, which is described here:
* stop the cluster file system in `/etc/pve/`

 # systemctl stop pve-cluster
* start it again but forcing local mode

 # pmxcfs -l
* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey
* stop the cluster file system again

 # systemctl stop pve-cluster
* restart {pve} services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd
include::pve-copyright.adoc[]