ifdef::manvolnum[]
pmxcfs(8)
=========
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
:pve-toplevel:
endif::manvolnum[]

The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve}-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes restrictions
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.

This system provides the following advantages:

* seamless replication of all configuration to all nodes in real time
* strong consistency checks to avoid duplicate VM IDs
* read-only mode when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* a distributed locking mechanism


POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like, but
some features are simply not implemented, because we do not need them:

* you can just generate normal files and directories, but no symbolic
 links, ...

* you can't rename non-empty directories (because this makes it easier
 to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on the path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)
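
You can observe these restrictions directly on a mounted `/etc/pve`.
The following commands are only a sketch (the link name is an arbitrary
example), and the exact error messages depend on the kernel and FUSE
versions in use:

 # both fail: pmxcfs implements neither symbolic links nor chmod
 ln -s storage.cfg /etc/pve/storage-link
 chmod 600 /etc/pve/storage.cfg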


File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/
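
You can verify this from a shell, for example:

 ls -ld /etc/pve /etc/pve/priv

The first entry is readable by group `www-data`, while everything below
`priv/` is restricted to `root`.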


Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File System Layout
------------------

The file system is mounted at:

 /etc/pve
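
You can verify the mount, for example with `findmnt`:

 findmnt /etc/pve

This should report a FUSE file system mounted on `/etc/pve`.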

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x, this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`authkey.pub` | Public key used by the ticket system
|`pve-root-ca.pem` | Public certificate of the cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by the ticket system
|`priv/pve-root-ca.key` | Private key of the cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for the web server (signed by the cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for the web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======
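
Since all of these files live below the common mount point, ordinary
shell tools work on them. For example, listing every KVM VM
configuration file in the cluster is a simple glob:

 ls /etc/pve/nodes/*/qemu-server/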


Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======
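
For example, on a node with the (hypothetical) host name `node1`, the
`local` link resolves as follows:

 readlink /etc/pve/local
 nodes/node1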


Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======
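
For example, you can inspect the current cluster membership with:

 cat /etc/pve/.members

The output is a JSON object; treat its exact fields as debugging
information rather than as a stable interface.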


Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug
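
Since pmxcfs runs as part of the `pve-cluster` service, the verbose
messages end up in the system journal and can be viewed with, for
example:

 journalctl -u pve-cluster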


Recovery
--------

If you have major problems with your Proxmox VE host, for example
hardware issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE host.
On the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Afterwards, adapt `/etc/hostname` and `/etc/hosts`
according to the lost Proxmox VE host, then reboot and check (and don't
forget your VM/CT data).
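
A minimal sketch of these steps on the new host, assuming the copied
database was placed at `/root/config.db` (a hypothetical staging path):

 systemctl stop pve-cluster
 cp /root/config.db /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db
 # now adapt /etc/hostname and /etc/hosts, then reboot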


Remove Cluster Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it
from your cluster. This makes sure that all secret cluster/ssh keys and
any shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling it, which is described in
<<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>.


Recovering/Moving Guests from Failed Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as the
owner of the respective guest. This concept enables the usage of local locks
instead of expensive cluster-wide locks for preventing concurrent guest
configuration changes.

As a consequence, if the owning node of a guest fails (e.g., because of a power
outage, fencing event, ...), a regular migration is not possible (even if all
the disks are located on shared storage), because such a local lock on the
(dead) owning node is unobtainable. This is not a problem for HA-managed
guests, as {pve}'s High Availability stack includes the necessary
(cluster-wide) locking and watchdog functionality to ensure correct and
automatic recovery of guests from fenced nodes.

If a non-HA-managed guest has only shared disks (and no other local resources
which are only available on the failed node are configured), a manual recovery
is possible by simply moving the guest configuration file from the failed
node's directory in `/etc/pve/` to an alive node's directory (which changes the
logical owner or location of the guest).

For example, recovering the VM with ID `100` from a dead `node1` to another
node `node2` works by running the following command as root on any member node
of the cluster:

 mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/

WARNING: Before manually recovering a guest like this, make absolutely sure
that the failed source node is really powered off/fenced. Otherwise {pve}'s
locking principles are violated by the `mv` command, which can have unexpected
consequences.

WARNING: Guests with local disks (or other local resources which are only
available on the dead node) are not recoverable like this. Either wait for the
failed node to rejoin the cluster or restore such guests from backups.

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]