Proxmox Cluster file system (pmxcfs)
====================================

The Proxmox Cluster file system (pmxcfs) is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using corosync. We use this to store all PVE-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. That imposes a restriction
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.

Advantages
----------

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism

POSIX Compatibility
~~~~~~~~~~~~~~~~~~~

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can only generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)

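For example, both of the following commands are expected to fail below
'/etc/pve' (an illustrative session, not taken from a real system; the
exact error messages depend on the FUSE version in use):

 # ln -s storage.cfg /etc/pve/storage-link
 # chmod 600 /etc/pve/storage.cfg
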
File access rights
~~~~~~~~~~~~~~~~~~

All files and directories are owned by user 'root' and have group
'www-data'. Only root has write permissions, but group 'www-data' can
read most files. Files below the following paths:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

are only accessible by root.

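You can check the resulting ownership and access modes with standard
tools, for example (illustrative commands, output omitted):

 # stat -c '%U %G %a %n' /etc/pve/user.cfg
 # stat -c '%U %G %a %n' /etc/pve/priv/authkey.key
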
Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File system layout
------------------

The file system is mounted at:

 /etc/pve

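You can verify that the FUSE mount is active, for example with:

 # findmnt /etc/pve
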
Files
~~~~~

[width="100%",cols="m,d"]
|=======
|corosync.conf |corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|storage.cfg |{pve} storage configuration
|datacenter.cfg |{pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|user.cfg |{pve} access control configuration (users/groups/...)
|domains.cfg |{pve} authentication domains
|authkey.pub |public key used by ticket system
|pve-root-ca.pem |public certificate of cluster CA
|priv/shadow.cfg |shadow password file
|priv/authkey.key |private key used by ticket system
|priv/pve-root-ca.key |private key of cluster CA
|nodes/<NAME>/pve-ssl.pem |public ssl certificate for web server (signed by cluster CA)
|nodes/<NAME>/pve-ssl.key |private ssl key for pve-ssl.pem
|nodes/<NAME>/pveproxy-ssl.pem |public ssl certificate (chain) for web server (optional override for pve-ssl.pem)
|nodes/<NAME>/pveproxy-ssl.key |private ssl key for pveproxy-ssl.pem (optional)
|nodes/<NAME>/qemu-server/<VMID>.conf |VM configuration data for KVM VMs
|nodes/<NAME>/lxc/<VMID>.conf |VM configuration data for LXC containers
|firewall/cluster.fw |Firewall config applied to all nodes
|firewall/<NAME>.fw |Firewall config for individual nodes
|firewall/<VMID>.fw |Firewall config for VMs and Containers
|=======

Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|local |nodes/<LOCAL_HOST_NAME>
|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
|=======
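
For example, on a node named 'node1' (a placeholder hostname), the
configuration of VM 100 (also a placeholder) can be read through either
of the following paths:

 # cat /etc/pve/nodes/node1/qemu-server/100.conf
 # cat /etc/pve/local/qemu-server/100.conf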

Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|.version |file versions (to detect file modifications)
|.members |info about cluster members
|.vmlist |list of all VMs
|.clusterlog |cluster log (last 50 entries)
|.rrd |RRD data (most recent entries)
|=======
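
These status files contain plain JSON and can be inspected with
standard tools, for example (the 'jq' utility used here is an optional
extra and not part of pmxcfs):

 # cat /etc/pve/.members
 # jq . /etc/pve/.vmlist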

Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug

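To follow the resulting messages, you can, for example, watch the
journal of the pve-cluster service:

 # journalctl -u pve-cluster -f
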
Recovery
--------

If you have major problems with your Proxmox VE host, e.g. hardware
issues, it could be helpful to just copy the pmxcfs database file
/var/lib/pve-cluster/config.db and move it to a new Proxmox VE
host. On the new host (with nothing running), you need to stop the
pve-cluster service and replace the config.db file (required
permissions 0600). Afterwards, adapt '/etc/hostname' and '/etc/hosts'
according to the lost Proxmox VE host, then reboot and check. (And
don't forget your VM/CT data.)

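A minimal sketch of these steps on the new host could look like this
(assuming the old database has already been copied over to
'/root/config.db'; that path is only an example):

 # systemctl stop pve-cluster
 # cp /root/config.db /var/lib/pve-cluster/config.db
 # chown root:root /var/lib/pve-cluster/config.db
 # chmod 0600 /var/lib/pve-cluster/config.db

Then adapt '/etc/hostname' and '/etc/hosts' and reboot, as described
above.
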
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after you have removed it
from your cluster. This makes sure that all secret cluster/ssh keys and
any shared configuration data are destroyed.

In some cases, you might prefer to put a node back to local mode
without reinstalling, which is described here:

* stop the cluster file system in '/etc/pve/'

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster config

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart pve services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd