ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-cli.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
endif::manvolnum[]

The Proxmox Cluster File System (pmxcfs) is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using corosync. We use it to store all {pve} related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.
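
As a rough sanity check on that claim (the ~1KB average size per guest
configuration file is an assumption, not a measured value):

```python
# Back-of-the-envelope capacity estimate; the average config size
# below is an assumed figure, not a measurement.
MAX_SIZE = 30 * 1024 * 1024   # 30MB in-memory limit
AVG_CONFIG = 1024             # assume ~1KB per guest configuration file

print(MAX_SIZE // AVG_CONFIG)  # 30720 guest configs fit in theory
```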

This system provides the following advantages:

* seamless replication of all configuration files to all nodes in real time
* strong consistency checks to avoid duplicate VM IDs
* read-only access when a node loses quorum
* automatic propagation of the corosync cluster configuration to all nodes
* a distributed locking mechanism

POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can only generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on paths)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)


File access rights
------------------

All files and directories are owned by user 'root' and have group
'www-data'. Only root has write permissions, but group 'www-data' can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

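These rules can be summarized in a small sketch (this only models the
documented behavior; it is not how pmxcfs itself is implemented, and
the function name is made up):

```python
def pmxcfs_ownership(relpath):
    """Model of the access rules above, for a path relative to
    /etc/pve: returns (owner, group, readable_by_www_data)."""
    parts = relpath.strip("/").split("/")
    # priv/ at the top level, and nodes/<NAME>/priv/, are root-only
    private = parts[0] == "priv" or (
        len(parts) >= 3 and parts[0] == "nodes" and parts[2] == "priv"
    )
    return ("root", "www-data", not private)

print(pmxcfs_ownership("storage.cfg"))       # ('root', 'www-data', True)
print(pmxcfs_ownership("priv/shadow.cfg"))   # ('root', 'www-data', False)
```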

Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File system layout
------------------

The file system is mounted at:

 /etc/pve

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|corosync.conf |corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|storage.cfg |{pve} storage configuration
|datacenter.cfg |{pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|user.cfg |{pve} access control configuration (users/groups/...)
|domains.cfg |{pve} authentication domains
|authkey.pub | public key used by ticket system
|pve-root-ca.pem | public certificate of cluster CA
|priv/shadow.cfg | shadow password file
|priv/authkey.key | private key used by ticket system
|priv/pve-root-ca.key | private key of cluster CA
|nodes/<NAME>/pve-ssl.pem | public SSL certificate for web server (signed by cluster CA)
|nodes/<NAME>/pve-ssl.key | private SSL key for pve-ssl.pem
|nodes/<NAME>/pveproxy-ssl.pem | public SSL certificate (chain) for web server (optional override for pve-ssl.pem)
|nodes/<NAME>/pveproxy-ssl.key | private SSL key for pveproxy-ssl.pem (optional)
|nodes/<NAME>/qemu-server/<VMID>.conf | VM configuration data for KVM VMs
|nodes/<NAME>/lxc/<VMID>.conf | VM configuration data for LXC containers
|firewall/cluster.fw | firewall configuration applied to all nodes
|firewall/<NAME>.fw | firewall configuration for individual nodes
|firewall/<VMID>.fw | firewall configuration for VMs and containers
|=======

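The per-guest configuration files follow a predictable pattern; a
small helper (the function name and example values are made up)
illustrates the layout:

```python
def guest_config_path(node, vmid, kind="qemu-server"):
    # kind is "qemu-server" for KVM VMs, "lxc" for containers
    return f"/etc/pve/nodes/{node}/{kind}/{vmid}.conf"

print(guest_config_path("node1", 100))
# /etc/pve/nodes/node1/qemu-server/100.conf
print(guest_config_path("node1", 101, kind="lxc"))
# /etc/pve/nodes/node1/lxc/101.conf
```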
Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|local |nodes/<LOCAL_HOST_NAME>
|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
|=======

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
| .version |file versions (to detect file modifications)
| .members |info about cluster members
| .vmlist |list of all VMs
| .clusterlog |cluster log (last 50 entries)
| .rrd |RRD data (most recent entries)
|=======

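These files contain plain JSON and can be read with any JSON parser.
As an illustration, '.members' content roughly along the following
lines (all node names, addresses and values below are made up) can be
inspected like so:

```python
import json

# Hypothetical contents of /etc/pve/.members (values are made up).
members = json.loads("""
{
  "nodename": "node1",
  "version": 3,
  "cluster": {"name": "demo", "version": 2, "nodes": 2, "quorate": 1},
  "nodelist": {
    "node1": {"id": 1, "online": 1, "ip": "192.168.0.1"},
    "node2": {"id": 2, "online": 0, "ip": "192.168.0.2"}
  }
}
""")

# List the nodes currently reported as online.
online = [name for name, info in members["nodelist"].items() if info["online"]]
print(online)   # ['node1']
```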
Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug


Recovery
--------

If you have major problems with your Proxmox VE host, for example
hardware issues, it can be helpful to copy the pmxcfs database file
'/var/lib/pve-cluster/config.db' and move it to a new Proxmox VE
host. On the new host (with nothing running), stop the 'pve-cluster'
service and replace the 'config.db' file (required permissions
'0600'). Then adapt '/etc/hostname' and '/etc/hosts' according to the
lost Proxmox VE host, reboot and check. (And don't forget your
VM/CT data.)

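The copy-and-permissions step can be sketched like this, using a
stand-in file in a temporary directory rather than a live
'/var/lib/pve-cluster/config.db':

```python
import os
import shutil
import stat
import tempfile

# Simulate restoring config.db: copy the backup into place, then
# restrict it to the 0600 permissions described above.
with tempfile.TemporaryDirectory() as tmp:
    backup = os.path.join(tmp, "config.db.backup")
    target = os.path.join(tmp, "config.db")
    with open(backup, "wb") as f:
        f.write(b"stand-in for the sqlite database")

    shutil.copy2(backup, target)
    os.chmod(target, 0o600)   # root read/write only

    print(oct(stat.S_IMODE(os.stat(target).st_mode)))  # 0o600
```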
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it
from your cluster. This makes sure that all secret cluster/ssh keys
and any shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling it, which is described here:

* stop the cluster file system in '/etc/pve/'

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart {pve} services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]