include::attributes.txt[]

pmxcfs - Proxmox Cluster File System

include::pmxcfs.8-cli.adoc[]

Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve}-related
configuration files.
Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.
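As a rough back-of-the-envelope check of that claim (the ~4 KiB average config size used here is an illustrative assumption, not a measured value):

```python
# How many guest configuration files fit into the 30 MB limit,
# assuming an average config file size of about 4 KiB (assumption).
limit_bytes = 30 * 1024 * 1024
avg_config_bytes = 4 * 1024
capacity = limit_bytes // avg_config_bytes
print(capacity)  # → 7680
```

So even a pessimistic per-file estimate leaves room for thousands of guest configurations.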
This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism
The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can just generate normal files and directories, but no symbolic
  links

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique)

* you can't change file permissions (permissions are based on the path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)
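To make the `O_EXCL` limitation concrete, the sketch below shows what atomic exclusive-create semantics look like on a regular local filesystem; pmxcfs (like old NFS) does not give this guarantee, so applications must not rely on `O_EXCL` for locking there:

```python
import os
import tempfile

# On a local POSIX filesystem, O_CREAT|O_EXCL fails if the file already
# exists, and the existence check plus the create happen as one atomic
# step. pmxcfs does not provide this atomicity.
path = os.path.join(tempfile.mkdtemp(), "lockfile")

fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)  # file is new: succeeds
os.close(fd)

refused = False
try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)   # file exists: must fail
except FileExistsError:
    refused = True
print("second exclusive create refused:", refused)
```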
All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.
We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].
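Because the backing store is a plain SQLite file, a copy of it can be inspected offline with standard tools. A minimal sketch; the file name here is a placeholder (on a real host, stop `pve-cluster` first and work on a *copy* of `/var/lib/pve-cluster/config.db`), and the stand-in table is created only so the example is self-contained:

```python
import sqlite3

# DB_PATH is a placeholder for a copy of /var/lib/pve-cluster/config.db.
DB_PATH = "config-copy.db"

# Stand-in database so this sketch runs anywhere; the real config.db
# has its own pmxcfs-specific schema.
with sqlite3.connect(DB_PATH) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS tree (name TEXT)")
    conn.execute("INSERT INTO tree (name) VALUES ('datacenter.cfg')")

# Open read-only and list the tables -- this part works on any SQLite file.
ro = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
tables = [row[0] for row in ro.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
ro.close()
```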
The file system is mounted at:

 /etc/pve
Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`authkey.pub` | Public key used by ticket system
|`pve-root-ca.pem` | Public certificate of cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by ticket system
|`priv/pve-root-ca.key` | Private key of cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for web server (signed by cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======
Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======
Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======
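These status files contain JSON and are easy to consume from scripts. The document below is an illustrative sample (the field layout is assumed for demonstration, not captured from a live cluster); on a node you would read `/etc/pve/.members` instead:

```python
import json

# Illustrative .members-style document (assumed layout, for demonstration).
sample = """
{
  "nodename": "pve1",
  "version": 4,
  "cluster": { "name": "demo", "version": 2, "nodes": 2, "quorate": 1 },
  "nodelist": {
    "pve1": { "id": 1, "online": 1, "ip": "192.168.1.10" },
    "pve2": { "id": 2, "online": 0, "ip": "192.168.1.11" }
  }
}
"""

members = json.loads(sample)
online = [name for name, info in members["nodelist"].items() if info.get("online")]
print("quorate:", bool(members["cluster"]["quorate"]))  # → quorate: True
print("online:", online)                                # → online: ['pve1']
```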
Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug
Recovery
--------

If you have major problems with your Proxmox VE host, e.g. hardware
issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Then adapt `/etc/hostname` and `/etc/hosts` to match
the lost Proxmox VE host, reboot and check (and don't forget your
VM/CT data!).
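The database move itself is just a file copy plus the `0600` permission bit. A sketch with placeholder file names (both stand in for `/var/lib/pve-cluster/config.db` on the old and new node; remember that the `pve-cluster` service must be stopped on the target before replacing the file):

```shell
# Placeholder stand-ins for /var/lib/pve-cluster/config.db.
touch old_config.db                          # stands in for the rescued database
install -m 0600 old_config.db new_config.db  # copy with the required 0600 mode
stat -c %a new_config.db                     # prints: 600
```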
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you removed it from
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data is destroyed.

In some cases, you might prefer to put a node back to local mode
without reinstalling it, which is described here:

* stop the cluster file system in `/etc/pve/`

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart {pve} services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd
include::pve-copyright.adoc[]