+[[chapter_pmxcfs]]
ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+pmxcfs(8)
+=========
+:pve-toplevel:
NAME
----
pmxcfs - Proxmox Cluster File System
-SYNOPSYS
+SYNOPSIS
--------
-include::pmxcfs.8-cli.adoc[]
+include::pmxcfs.8-synopsis.adoc[]
DESCRIPTION
-----------
ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]
The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
Although the file system stores all data inside a persistent database
-on disk, a copy of the data resides in RAM. That imposes restriction
+on disk, a copy of the data resides in RAM. That imposes a restriction
-on the maximal size, which is currently 30MB. This is still enough to
+on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.
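+
+To get a rough idea of the current usage, you can look at the size of the
+backing database file and of the mounted file system (a quick check, not an
+exact accounting):
+
+ du -h /var/lib/pve-cluster/config.db
+ df -h /etc/pve
+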
This system provides the following advantages:
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism
+
POSIX Compatibility
-------------------
* `O_TRUNC` creates are not atomic (FUSE restriction)
-File access rights
+File Access Rights
------------------
All files and directories are owned by user `root` and have group
We use the http://www.corosync.org[Corosync Cluster Engine] for
-cluster communication, and http://www.sqlite.org[SQlite] for the
+cluster communication, and http://www.sqlite.org[SQLite] for the
-database file. The filesystem is implemented in user space using
+database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].
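+
+If you want to verify this on a running node, you can check that `/etc/pve` is
+a FUSE mount backed by the `pmxcfs` process (purely illustrative checks):
+
+ mount | grep /etc/pve
+ ps -C pmxcfs
+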
-File system layout
+File System Layout
------------------
The file system is mounted at:
|`datacenter.cfg` | {pve} datacenter wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
+|`status.cfg` | {pve} external metrics server configuration
|`authkey.pub` | Public key used by ticket system
|`pve-root-ca.pem` | Public certificate of cluster CA
|`priv/shadow.cfg` | Shadow password file
|`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
|=======
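+
+To see which of these files are actually present on your own cluster, you can
+simply list the mount point (the exact set of files differs between setups):
+
+ ls -R /etc/pve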
+
Symbolic links
~~~~~~~~~~~~~~
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======
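+
+On every node these links resolve to that node's own subdirectory below
+`nodes/`; this can be verified locally, for example for the `lxc` link shown
+above:
+
+ readlink /etc/pve/lxc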
+
Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|`.rrd` |RRD data (most recent entries)
|=======
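+
+These status files can be read like any regular file; for example, to have a
+quick look at the most recent RRD entries (shown only as an illustration, the
+exact content may change between versions):
+
+ cat /etc/pve/.rrd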
+
Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~
You can enable verbose syslog messages with:
- echo "1" >/etc/pve/.debug
+ echo "1" >/etc/pve/.debug
And disable verbose syslog messages with:
- echo "0" >/etc/pve/.debug
+ echo "0" >/etc/pve/.debug
Recovery
lost Proxmox VE host, then reboot and check. (And don't forget your
VM/CT data)
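+
+As a rough sketch (assuming the old host's database was saved to
+`/root/config.db`, a path chosen purely for illustration), replacing the
+database on the new host could look like this, before adapting the host name
+and rebooting as described above:
+
+ systemctl stop pve-cluster
+ cp /root/config.db /var/lib/pve-cluster/config.db
+ chmod 0600 /var/lib/pve-cluster/config.db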
+
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data is destroyed.
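+
+Removing the node from the cluster beforehand is done with `pvecm` on one of
+the remaining member nodes, for example (the node name `oldnode` is used here
+only for illustration):
+
+ pvecm delnode oldnode
+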
-In some cases, you might prefer to put a node back to local mode
-without reinstall, which is described here:
-
-* stop the cluster file system in `/etc/pve/`
-
- # systemctl stop pve-cluster
+In some cases, you might prefer to put a node back to local mode without
+reinstall, which is described in
+<<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>.
-* start it again but forcing local mode
- # pmxcfs -l
+Recovering/Moving Guests from Failed Nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* remove the cluster config
+For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
+`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as
+owner of the respective guest. This concept enables the use of local locks
+instead of expensive cluster-wide locks for preventing concurrent guest
+configuration changes.
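+
+For example, the guests owned by a (hypothetical) node `node1` can be listed
+from any cluster member with:
+
+ ls /etc/pve/nodes/node1/qemu-server/
+ ls /etc/pve/nodes/node1/lxc/
+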
- # rm /etc/pve/cluster.conf
- # rm /etc/cluster/cluster.conf
- # rm /var/lib/pve-cluster/corosync.authkey
+As a consequence, if the owning node of a guest fails (e.g., because of a power
+outage, fencing event, etc.), a regular migration is not possible (even if all
+the disks are located on shared storage) because such a local lock on the
+(dead) owning node is unobtainable. This is not a problem for HA-managed
+guests, as {pve}'s High Availability stack includes the necessary
+(cluster-wide) locking and watchdog functionality to ensure correct and
+automatic recovery of guests from fenced nodes.
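+
+Whether a guest is HA-managed at all can be checked with the HA manager
+command-line tool, for example (just a quick check before considering any
+manual steps):
+
+ ha-manager status
+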
-* stop the cluster file system again
+If a non-HA-managed guest has only shared disks configured (and no other local
+resources that are only available on the failed node), a manual recovery
+is possible by simply moving the guest configuration file from the failed
+node's directory in `/etc/pve/` to an alive node's directory (which changes the
+logical owner or location of the guest).
- # systemctl stop pve-cluster
+For example, recovering the VM with ID `100` from a dead `node1` to another
+node `node2` works with the following command executed when logged in as root
+on any member node of the cluster:
-* restart pve services (or reboot)
+ mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/
- # systemctl start pve-cluster
- # systemctl restart pvedaemon
- # systemctl restart pveproxy
- # systemctl restart pvestatd
+WARNING: Before manually recovering a guest like this, make absolutely sure
+that the failed source node is really powered off/fenced. Otherwise {pve}'s
+locking principles are violated by the `mv` command, which can have unexpected
+consequences.
+WARNING: Guests with local disks (or other local resources which are only
+available on the dead node) are not recoverable like this. Either wait for the
+failed node to rejoin the cluster or restore such guests from backups.
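+
+To spot such local resources before attempting a manual move, you can inspect
+the guest's configuration file from any surviving node and check which storages
+its disks reference (again using the hypothetical `node1` and VMID `100` from
+above):
+
+ cat /etc/pve/nodes/node1/qemu-server/100.conf
+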
ifdef::manvolnum[]
include::pve-copyright.adoc[]