ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pmxcfs - Proxmox Cluster File System

SYNOPSIS
--------

include::pmxcfs.8-cli.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Cluster File System (pmxcfs)
====================================
include::attributes.txt[]
endif::manvolnum[]

The Proxmox Cluster File System (pmxcfs) is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using corosync. We use this to store all {pve}-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.
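
The 30MB figure applies to the in-RAM copy of the data; as a rough,
hedged check you can compare the on-disk database size (the
/var/lib/pve-cluster/config.db path discussed in the Recovery section)
against that limit. The helper name and the size-as-proxy idea are
illustrations, not part of {pve}:

```shell
# Rough sketch: compare the on-disk pmxcfs database size to the 30MB limit.
# The file size is only a proxy for the in-RAM copy, so treat the result as
# a hint, not a hard check.
check_pmxcfs_db_size() {
    db="${1:-/var/lib/pve-cluster/config.db}"
    limit=$((30 * 1024 * 1024))
    size=$(stat -c %s "$db" 2>/dev/null) || { echo "no database at $db"; return 1; }
    if [ "$size" -ge "$limit" ]; then
        echo "WARNING: $db is $size bytes (limit $limit)"
    else
        echo "OK: $db is $size bytes (limit $limit)"
    fi
}
```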

Advantages
----------

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism

POSIX Compatibility
~~~~~~~~~~~~~~~~~~~

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can only generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)


File access rights
~~~~~~~~~~~~~~~~~~

All files and directories are owned by user 'root' and have group
'www-data'. Only root has write permissions, but group 'www-data' can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.
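
The path-based rule above can be mirrored in a small shell helper; the
function name `is_private_path` is purely illustrative and not part of
{pve}:

```shell
# Hypothetical helper illustrating the rule above: everything below
# /etc/pve/priv/ and /etc/pve/nodes/<NAME>/priv/ is accessible by root only.
is_private_path() {
    case "$1" in
        /etc/pve/priv/*|/etc/pve/nodes/*/priv/*) return 0 ;;  # root-only
        *) return 1 ;;                                        # www-data readable
    esac
}
```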

Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File system layout
------------------

The file system is mounted at:

 /etc/pve

Files
~~~~~

[width="100%",cols="m,d"]
|=======
|corosync.conf |corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|storage.cfg |{pve} storage configuration
|datacenter.cfg |{pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|user.cfg |{pve} access control configuration (users/groups/...)
|domains.cfg |{pve} authentication domains
|authkey.pub |public key used by ticket system
|pve-root-ca.pem |public certificate of cluster CA
|priv/shadow.cfg |shadow password file
|priv/authkey.key |private key used by ticket system
|priv/pve-root-ca.key |private key of cluster CA
|nodes/<NAME>/pve-ssl.pem |public SSL certificate for web server (signed by cluster CA)
|nodes/<NAME>/pve-ssl.key |private SSL key for pve-ssl.pem
|nodes/<NAME>/pveproxy-ssl.pem |public SSL certificate (chain) for web server (optional override for pve-ssl.pem)
|nodes/<NAME>/pveproxy-ssl.key |private SSL key for pveproxy-ssl.pem (optional)
|nodes/<NAME>/qemu-server/<VMID>.conf |VM configuration data for KVM VMs
|nodes/<NAME>/lxc/<VMID>.conf |VM configuration data for LXC containers
|firewall/cluster.fw |Firewall configuration applied to all nodes
|firewall/<NAME>.fw |Firewall configuration for individual nodes
|firewall/<VMID>.fw |Firewall configuration for VMs and containers
|=======
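
The nodes/<NAME>/... layout from the table can be captured in a tiny
helper; `vm_config_path` is a hypothetical name introduced here for
illustration only:

```shell
# Hypothetical helper (not part of {pve}): build the config file path for a
# guest, following the nodes/<NAME>/... layout from the table above.
vm_config_path() {
    node="$1"; vmid="$2"; type="$3"   # type: qemu or lxc
    case "$type" in
        qemu) echo "/etc/pve/nodes/$node/qemu-server/$vmid.conf" ;;
        lxc)  echo "/etc/pve/nodes/$node/lxc/$vmid.conf" ;;
        *)    return 1 ;;
    esac
}
```

For example, `vm_config_path node1 100 qemu` prints
`/etc/pve/nodes/node1/qemu-server/100.conf`.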

Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|local |nodes/<LOCAL_HOST_NAME>
|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
|=======

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|.version |file versions (to detect file modifications)
|.members |info about cluster members
|.vmlist |list of all VMs
|.clusterlog |cluster log (last 50 entries)
|.rrd |RRD data (most recent entries)
|=======

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug
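
The two commands above can be wrapped in a convenience function; the
wrapper name and the override variable are hypothetical, introduced
here only so the sketch can be exercised off-host:

```shell
# Hypothetical wrapper around the two commands above. The target file defaults
# to /etc/pve/.debug but can be overridden (e.g. for testing off a PVE node).
pmxcfs_debug() {
    file="${PMXCFS_DEBUG_FILE:-/etc/pve/.debug}"
    case "$1" in
        on)  echo "1" > "$file" ;;
        off) echo "0" > "$file" ;;
        *)   echo "usage: pmxcfs_debug on|off" >&2; return 2 ;;
    esac
}
```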


Recovery
--------

If you have major problems with your Proxmox VE host, e.g. hardware
issues, it could be helpful to just copy the pmxcfs database file
/var/lib/pve-cluster/config.db and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
pve-cluster service and replace the config.db file (required
permissions 0600). Then adapt '/etc/hostname' and '/etc/hosts'
according to the lost Proxmox VE host, then reboot and check. (And
don't forget your VM/CT data.)
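
The copy-with-permissions step can be rehearsed in a scratch directory
first. This sketch uses stand-in paths and contents; on a real system
the target is /var/lib/pve-cluster/config.db and pve-cluster must be
stopped before replacing it:

```shell
# Dry run of the database copy in a scratch directory (stand-in file only).
tmp=$(mktemp -d)
echo "old-host-database" > "$tmp/config.db"         # stands in for the copied DB
install -m 0600 "$tmp/config.db" "$tmp/restored.db" # copy with required 0600 mode
stat -c %a "$tmp/restored.db"                       # prints 600
```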

Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it
from your cluster. This makes sure that all secret cluster/ssh keys
and any shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode
without reinstalling, which is described here:

* stop the cluster file system in '/etc/pve/'

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart PVE services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd
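
For reference, the steps above collected into one shell function. This
is a sketch only: it is destructive (it removes the cluster
configuration), it assumes systemd, and the function is defined here
but deliberately never run:

```shell
# Sketch of the local-mode procedure above as a single function. Destructive:
# it removes the cluster configuration. Defined only; call it explicitly.
pmxcfs_force_local_mode() {
    systemctl stop pve-cluster
    pmxcfs -l                                   # restart pmxcfs in local mode
    rm -f /etc/pve/cluster.conf \
          /etc/cluster/cluster.conf \
          /var/lib/pve-cluster/corosync.authkey
    systemctl stop pve-cluster                  # stop the forced-local instance
    systemctl start pve-cluster
    systemctl restart pvedaemon pveproxy pvestatd
}
```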


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]