[[chapter_pmxcfs]]
ifdef::manvolnum[]
pmxcfs(8)
=========
:pve-toplevel:

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
:pve-toplevel:
endif::manvolnum[]

The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using `corosync`. We use this to store all {pve}-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30 MB. This is still enough to
store the configuration of several thousand virtual machines.

This system provides the following advantages:

* Seamless replication of all configuration to all nodes in real time
* Strong consistency checks to avoid duplicate VM IDs
* Read-only access when a node loses quorum
* Automatic distribution of corosync cluster configuration updates to all nodes
* A distributed locking mechanism

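For a quick check that pmxcfs is up and serving `/etc/pve`, you can look at the
mount point and at the `pve-cluster` service, which runs pmxcfs. This is just
an illustrative sketch for a systemd-based node:

 # the service providing pmxcfs should be active
 systemctl status pve-cluster
 # /etc/pve should show up as a FUSE mount
 findmnt /etc/pve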

POSIX Compatibility
-------------------

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* You can only generate normal files and directories, but no symbolic
links, ...

* You can't rename non-empty directories (because this makes it easier
to guarantee that VMIDs are unique).

* You can't change file permissions (permissions are based on paths)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)

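To illustrate the limitations above, the following operations are expected to
fail on the mounted file system (the example file names are arbitrary):

 # symbolic links cannot be created below /etc/pve
 ln -s storage.cfg /etc/pve/storage-link
 # file permissions cannot be changed, they are derived from the path
 chmod 640 /etc/pve/storage.cfg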

File Access Rights
------------------

All files and directories are owned by user `root` and have group
`www-data`. Only root has write permissions, but group `www-data` can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

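For example, you can verify the ownership and mode that pmxcfs presents for a
given file with `stat` (any existing file below `/etc/pve` works):

 # prints owner, group and permission bits, here for the storage configuration
 stat -c '%U %G %a' /etc/pve/storage.cfg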

Technology
----------

We use the https://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and https://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
https://github.com/libfuse/libfuse[FUSE].
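
If you are curious about the on-disk format, the backing database can be
inspected with the `sqlite3` command line tool (assuming it is installed).
This is only meant for exploration; the schema is an internal detail and not a
stable interface:

 # show the schema of the pmxcfs database
 # (safest on a copy, or while the pve-cluster service is stopped)
 sqlite3 /var/lib/pve-cluster/config.db .schema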

File System Layout
------------------

The file system is mounted at:

 /etc/pve

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|`corosync.conf` | Corosync cluster configuration file (prior to {pve} 4.x this file was called `cluster.conf`)
|`storage.cfg` | {pve} storage configuration
|`datacenter.cfg` | {pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|`user.cfg` | {pve} access control configuration (users/groups/...)
|`domains.cfg` | {pve} authentication domains
|`status.cfg` | {pve} external metrics server configuration
|`authkey.pub` | Public key used by the ticket system
|`pve-root-ca.pem` | Public certificate of the cluster CA
|`priv/shadow.cfg` | Shadow password file
|`priv/authkey.key` | Private key used by the ticket system
|`priv/pve-root-ca.key` | Private key of the cluster CA
|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for the web server (signed by the cluster CA)
|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for the web server (optional override for `pve-ssl.pem`)
|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
|`nodes/<NAME>/lxc/<VMID>.conf` | Configuration data for LXC containers
|`firewall/cluster.fw` | Firewall configuration applied to all nodes
|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
|`firewall/<VMID>.fw` | Firewall configuration for VMs and containers
|=======

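To see which nodes and guests currently have configuration files, you can
simply list the corresponding directories (replace `<NAME>` with a node name):

 ls /etc/pve/nodes/
 ls /etc/pve/nodes/<NAME>/qemu-server/
 ls /etc/pve/nodes/<NAME>/lxc/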

Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|`local` | `nodes/<LOCAL_HOST_NAME>`
|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
|=======

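These links let you address the local node's files without knowing its name.
For example, the following prints the directory the `local` link resolves to:

 readlink /etc/pve/local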

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|`.version` |File versions (to detect file modifications)
|`.members` |Info about cluster members
|`.vmlist` |List of all VMs
|`.clusterlog` |Cluster log (last 50 entries)
|`.rrd` |RRD data (most recent entries)
|=======

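These files contain plain JSON and can be read with standard tools, for
example:

 # current cluster membership and the cluster-wide guest list
 cat /etc/pve/.members
 cat /etc/pve/.vmlist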

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug

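To follow the resulting messages on a systemd-based node, you can watch the
journal of the `pve-cluster` service, which runs pmxcfs:

 journalctl -f -u pve-cluster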

Recovery
--------

If you have major problems with your {pve} host, for example hardware
issues, it can be helpful to copy the pmxcfs database file
`/var/lib/pve-cluster/config.db` and move it to a new {pve} host. On
the new host (with nothing running), you need to stop the
`pve-cluster` service and replace the `config.db` file (required
permissions `0600`). Then adapt `/etc/hostname` and `/etc/hosts` to
match the lost {pve} host, reboot and check that everything works as
expected (and don't forget your VM/CT data).

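The following is a minimal sketch of these steps on the new host, where
`config.db.recovered` stands for the database file copied from the failed
host (the name is only a placeholder):

 # stop pmxcfs, replace its database and restrict the permissions
 systemctl stop pve-cluster
 cp config.db.recovered /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db
 # now adapt /etc/hostname and /etc/hosts, then reboot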

Remove Cluster Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you remove it from
your cluster. This ensures that all secret cluster/ssh keys and any
shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling, which is described in
<<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>.


Recovering/Moving Guests from Failed Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as the
owner of the respective guest. This concept enables the use of local locks
instead of expensive cluster-wide locks to prevent concurrent guest
configuration changes.

As a consequence, if the owning node of a guest fails (for example, due to a
power outage, fencing event, etc.), a regular migration is not possible (even
if all the disks are located on shared storage), because the local lock on the
(offline) owning node cannot be obtained. This is not a problem for HA-managed
guests, as {pve}'s High Availability stack includes the necessary
(cluster-wide) locking and watchdog functionality to ensure correct and
automatic recovery of guests from fenced nodes.

If a non-HA-managed guest has only shared disks (and no other local resources
which are only available on the failed node), a manual recovery
is possible by simply moving the guest configuration file from the failed
node's directory in `/etc/pve/` to an online node's directory (which changes the
logical owner or location of the guest).

For example, recovering the VM with ID `100` from an offline `node1` to another
node `node2` works by running the following command as root on any member node
of the cluster:

 mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/

WARNING: Before manually recovering a guest like this, make absolutely sure
that the failed source node is really powered off/fenced. Otherwise {pve}'s
locking principles are violated by the `mv` command, which can have unexpected
consequences.
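
One way to double-check the cluster's view of the failed node before (and
after) such a move is to look at the membership information, for example:

 # the failed source node should show up as offline/absent here
 pvecm status
 cat /etc/pve/.members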
213
214 WARNING: Guests with local disks (or other local resources which are only
215 available on the offline node) are not recoverable like this. Either wait for the
216 failed node to rejoin the cluster or restore such guests from backups.
217
218 ifdef::manvolnum[]
219 include::pve-copyright.adoc[]
220 endif::manvolnum[]