X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pmxcfs.adoc;h=d35c9606123a171cf02c375ae299cb8bd240230f;hp=493b22f909081cd95b363ef65fb1f4b4f35f140e;hb=540791016979f4ec2e83cd1933872f3266a10da6;hpb=0370748a0da407bb63e0ab8c0d0046e86aed6eaf

diff --git a/pmxcfs.adoc b/pmxcfs.adoc
index 493b22f..d35c960 100644
--- a/pmxcfs.adoc
+++ b/pmxcfs.adoc
@@ -1,27 +1,51 @@
-Proxmox Cluster file system (pmxcfs)
+ifdef::manvolnum[]
+pmxcfs(8)
+=========
+include::attributes.txt[]
+:pve-toplevel:
+
+NAME
+----
+
+pmxcfs - Proxmox Cluster File System
+
+SYNOPSIS
+--------
+
+include::pmxcfs.8-synopsis.adoc[]
+
+DESCRIPTION
+-----------
+endif::manvolnum[]
+
+ifndef::manvolnum[]
+Proxmox Cluster File System (pmxcfs)
 ====================================
+include::attributes.txt[]
+:pve-toplevel:
+endif::manvolnum[]
 
-The Proxmox Cluster file system (pmxcfs) is a database-driven file
+The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
 system for storing configuration files, replicated in real time to all
-cluster nodes using corosync. We use this to store all PVE related
+cluster nodes using `corosync`. We use this to store all PVE-related
 configuration files.
 
 Although the file system stores all data inside a persistent database
 on disk, a copy of the data resides in RAM. That imposes a restriction
-on the maximal size, which is currently 30MB. This is still enough to
+on the maximum size, which is currently 30MB. This is still enough to
 store the configuration of several thousand virtual machines.
 
-Advantages
-----------
+This system provides the following advantages:
 
 * seamless replication of all configuration to all nodes in real time
 * provides strong consistency checks to avoid duplicate VM IDs
-* read-only when a node looses quorum
+* read-only when a node loses quorum
 * automatic updates of the corosync cluster configuration to all nodes
 * includes a distributed locking mechanism
 
+
 POSIX Compatibility
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 The file system is based on FUSE, so the behavior is POSIX-like. But
 some features are simply not implemented, because we do not need them:
@@ -39,11 +63,11 @@ some features are simply not implemented, because we do not need them:
 
 * `O_TRUNC` creates are not atomic (FUSE restriction)
 
-File access rights
-~~~~~~~~~~~~~~~~~~
+File Access Rights
+------------------
 
-All files and directories are owned by user 'root' and have group
-'www-data'. Only root has write permissions, but group 'www-data' can
+All files and directories are owned by user `root` and have group
+`www-data`. Only root has write permissions, but group `www-data` can
 read most files. Files below the following paths:
 
 /etc/pve/priv/
@@ -51,15 +75,16 @@ read most files. Files below the following paths:
 
 are only accessible by root.
 
+
 Technology
 ----------
 
 We use the http://www.corosync.org[Corosync Cluster Engine] for
 cluster communication, and http://www.sqlite.org[SQLite] for the
-database file. The filesystem is implemented in user space using
+database file. The file system is implemented in user space using
 http://fuse.sourceforge.net[FUSE].
 
-File system layout
+File System Layout
 ------------------
 
 The file system is mounted at:
@@ -71,49 +96,52 @@ Files
 
 [width="100%",cols="m,d"]
 |=======
-|corosync.conf |corosync cluster configuration file (previous to {pve} 4.x this file was called cluster.conf)
-|storage.cfg |{pve} storage configuration
-|datacenter.cfg |{pve} datacenter wide configuration (keyboard layout, proxy, ...)
-|user.cfg |{pve} access control configuration (users/groups/...)
-|domains.cfg |{pve} Authentication domains
-|authkey.pub | public key used by ticket system
-|pve-root-ca.pem | public certificate of cluster CA
-|priv/shadow.cfg | shadow password file
-|priv/authkey.key | private key used by ticket system
-|priv/pve-root-ca.key | private key of cluster CA
-|nodes/<NAME>/pve-ssl.pem | public ssl certificate for web server (signed by cluster CA)
-|nodes/<NAME>/pve-ssl.key | private ssl key for pve-ssl.pem
-|nodes/<NAME>/pveproxy-ssl.pem | public ssl certificate (chain) for web server (optional override for pve-ssl.pem)
-|nodes/<NAME>/pveproxy-ssl.key | private ssl key for pveproxy-ssl.pem (optional)
-|nodes/<NAME>/qemu-server/<VMID>.conf | VM configuration data for KVM VMs
-|nodes/<NAME>/lxc/<VMID>.conf | VM configuration data for LXC containers
-|firewall/cluster.fw | Firewall config applied to all nodes
-|firewall/<NAME>.fw | Firewall config for individual nodes
-|firewall/<VMID>.fw | Firewall config for VMs and Containers
+|`corosync.conf` | Corosync cluster configuration file (previous to {pve} 4.x this file was called cluster.conf)
+|`storage.cfg` | {pve} storage configuration
+|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
+|`user.cfg` | {pve} access control configuration (users/groups/...)
+|`domains.cfg` | {pve} authentication domains
+|`authkey.pub` | Public key used by ticket system
+|`pve-root-ca.pem` | Public certificate of cluster CA
+|`priv/shadow.cfg` | Shadow password file
+|`priv/authkey.key` | Private key used by ticket system
+|`priv/pve-root-ca.key` | Private key of cluster CA
+|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for web server (signed by cluster CA)
+|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
+|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
+|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
+|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
+|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
+|`firewall/cluster.fw` | Firewall configuration applied to all nodes
+|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
+|`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
 |=======
 
+
 Symbolic links
 ~~~~~~~~~~~~~~
 
 [width="100%",cols="m,m"]
 |=======
-|local |nodes/<LOCAL_HOST_NAME>
-|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
-|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
+|`local` | `nodes/<LOCAL_HOST_NAME>`
+|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
+|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
 |=======
 
+
 Special status files for debugging (JSON)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 [width="100%",cols="m,d"]
 |=======
-| .version |file versions (to detect file modifications)
-| .members |Info about cluster members
-| .vmlist |List of all VMs
-| .clusterlog |Cluster log (last 50 entries)
-| .rrd |RRD data (most recent entries)
+|`.version` |File versions (to detect file modifications)
+|`.members` |Info about cluster members
+|`.vmlist` |List of all VMs
+|`.clusterlog` |Cluster log (last 50 entries)
+|`.rrd` |RRD data (most recent entries)
 |=======
 
+
 Enable/Disable debugging
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -131,13 +159,14 @@ Recovery
 
 If you have major problems with your Proxmox VE host, e.g.
hardware issues, it could be helpful to just copy the pmxcfs database file -/var/lib/pve-cluster/config.db and move it to a new Proxmox VE +`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the -pve-cluster service and replace the config.db file (needed permissions -0600). Second, adapt '/etc/hostname' and '/etc/hosts' according to the -lost Proxmox VE host, then reboot and check. (And don´t forget your +`pve-cluster` service and replace the `config.db` file (needed permissions +`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the +lost Proxmox VE host, then reboot and check. (And don't forget your VM/CT data) + Remove Cluster configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -148,7 +177,7 @@ shared configuration data is destroyed. In some cases, you might prefer to put a node back to local mode without reinstall, which is described here: -* stop the cluster file system in '/etc/pve/' +* stop the cluster file system in `/etc/pve/` # systemctl stop pve-cluster @@ -156,7 +185,7 @@ without reinstall, which is described here: # pmxcfs -l -* remove the cluster config +* remove the cluster configuration # rm /etc/pve/cluster.conf # rm /etc/cluster/cluster.conf @@ -164,12 +193,16 @@ without reinstall, which is described here: * stop the cluster file system again - # service pve-cluster stop + # systemctl stop pve-cluster + +* restart PVE services (or reboot) -* restart pve services (or reboot) + # systemctl start pve-cluster + # systemctl restart pvedaemon + # systemctl restart pveproxy + # systemctl restart pvestatd - # service pve-cluster start - # service pvedaemon restart - # service pveproxy restart - # service pvestatd restart +ifdef::manvolnum[] +include::pve-copyright.adoc[] +endif::manvolnum[]
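The patch above ends with the recovery procedure, which requires the copied database file to carry mode `0600`. As a small illustration outside the patch itself, the sketch below shows how that permission requirement can be sanity-checked before starting the `pve-cluster` service again; the throwaway stand-in file is an assumption so the example runs on any POSIX host, not only on a node that actually has `/var/lib/pve-cluster/config.db`.

```python
# Editor's sketch (not part of the patch): verify the 0600 permission the
# recovery text requires for the copied pmxcfs database. A temporary file
# stands in for /var/lib/pve-cluster/config.db so this runs anywhere; on a
# real node you would point db_path at the copied database instead.
import os
import stat
import tempfile

fd, db_path = tempfile.mkstemp(suffix="-config.db")  # stand-in file (assumption)
os.close(fd)

os.chmod(db_path, 0o600)  # owner read/write only, as the recovery steps require

mode = stat.S_IMODE(os.stat(db_path).st_mode)
print(oct(mode))  # 0o600
assert mode == 0o600, f"config.db must be 0600, found {oct(mode)}"

os.remove(db_path)
```

On an actual node this check would be run as root against the copied `config.db`, before `systemctl start pve-cluster`.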