Proxmox Cluster file system (pmxcfs)
====================================

The Proxmox Cluster file system (pmxcfs) is a database-driven file
system for storing configuration files, replicated in real time to all
cluster nodes using corosync. We use it to store all PVE-related
configuration files.

Although the file system stores all data inside a persistent database
on disk, a copy of the data resides in RAM. This imposes a restriction
on the maximum size, which is currently 30MB. This is still enough to
store the configuration of several thousand virtual machines.

Advantages
----------

* seamless replication of all configuration to all nodes in real time
* provides strong consistency checks to avoid duplicate VM IDs
* read-only when a node loses quorum
* automatic updates of the corosync cluster configuration to all nodes
* includes a distributed locking mechanism

POSIX Compatibility
~~~~~~~~~~~~~~~~~~~

The file system is based on FUSE, so the behavior is POSIX-like. But
some features are simply not implemented, because we do not need them:

* you can just generate normal files and directories, but no symbolic
  links, ...

* you can't rename non-empty directories (because this makes it easier
  to guarantee that VMIDs are unique).

* you can't change file permissions (permissions are based on path)

* `O_EXCL` creates are not atomic (like old NFS)

* `O_TRUNC` creates are not atomic (FUSE restriction)

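These restrictions are visible directly on a mounted cluster node; for example (illustrative file names, both commands are expected to be rejected):

```shell
# Run on a cluster node; both operations should fail on pmxcfs:
ln -s /etc/pve/local /etc/pve/mylink    # symbolic links are not supported
chmod 0600 /etc/pve/user.cfg            # permissions are fixed by path
```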

File access rights
~~~~~~~~~~~~~~~~~~

All files and directories are owned by user 'root' and have group
'www-data'. Only root has write permissions, but group 'www-data' can
read most files. Files below the following paths are only accessible
by root:

 /etc/pve/priv/
 /etc/pve/nodes/${NAME}/priv/

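On a running node, this ownership scheme can be verified with `stat`, for example:

```shell
# Regular configuration file: owned by root, group www-data can read it
stat -c '%U:%G %a %n' /etc/pve/storage.cfg
# Private key below priv/: accessible by root only
stat -c '%U:%G %a %n' /etc/pve/priv/authkey.key
```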
Technology
----------

We use the http://www.corosync.org[Corosync Cluster Engine] for
cluster communication, and http://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
http://fuse.sourceforge.net[FUSE].

File system layout
------------------

The file system is mounted at:

 /etc/pve

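Whether the FUSE mount is currently active can be checked with standard tools, for example:

```shell
# Lists the mount point, source and file system type when the
# pve-cluster service is running:
findmnt /etc/pve
```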
Files
~~~~~

[width="100%",cols="m,d"]
|=======
|corosync.conf |corosync cluster configuration file (prior to {pve} 4.x this file was called cluster.conf)
|storage.cfg |{pve} storage configuration
|datacenter.cfg |{pve} datacenter-wide configuration (keyboard layout, proxy, ...)
|user.cfg |{pve} access control configuration (users/groups/...)
|domains.cfg |{pve} authentication domains
|authkey.pub |public key used by the ticket system
|priv/shadow.cfg |shadow password file
|priv/authkey.key |private key used by the ticket system
|nodes/<NAME>/pve-ssl.pem |public SSL certificate for the web server
|nodes/<NAME>/priv/pve-ssl.key |private SSL key
|nodes/<NAME>/qemu-server/<VMID>.conf |VM configuration data for KVM VMs
|nodes/<NAME>/lxc/<VMID>.conf |VM configuration data for LXC containers
|firewall/cluster.fw |firewall configuration applied to all nodes
|firewall/<NAME>.fw |firewall configuration for individual nodes
|firewall/<VMID>.fw |firewall configuration for VMs and containers
|=======

Symbolic links
~~~~~~~~~~~~~~

[width="100%",cols="m,m"]
|=======
|local |nodes/<LOCAL_HOST_NAME>
|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
|=======

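Since these are ordinary symbolic links, `readlink` shows where they point; the host name below is illustrative:

```shell
# On a node named 'pve1' this resolves to nodes/pve1:
readlink /etc/pve/local
```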
Special status files for debugging (JSON)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[width="100%",cols="m,d"]
|=======
|.version |file versions (to detect file modifications)
|.members |info about cluster members
|.vmlist |list of all VMs
|.clusterlog |cluster log (last 50 entries)
|.rrd |RRD data (most recent entries)
|=======

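These files contain JSON, so any JSON tooling can process them. A minimal sketch, using a hypothetical document in the style of `.members` (real field names may differ; on a node you would read `/etc/pve/.members` directly):

```shell
# Hypothetical stand-in for the content of /etc/pve/.members:
sample='{"nodename": "pve1", "version": 4}'
# Extract one field with python3 (assumed to be installed):
echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["nodename"])'
# prints: pve1
```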
Enable/Disable debugging
~~~~~~~~~~~~~~~~~~~~~~~~

You can enable verbose syslog messages with:

 echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

 echo "0" >/etc/pve/.debug

Recovery
--------

If you have major problems with your Proxmox VE host, e.g. hardware
issues, it can be helpful to copy the pmxcfs database file
'/var/lib/pve-cluster/config.db' and move it to a new Proxmox VE
host. On the new host (with nothing running), stop the 'pve-cluster'
service and replace the 'config.db' file (required permissions:
0600). Then adapt '/etc/hostname' and '/etc/hosts' according to the
lost Proxmox VE host, reboot and check (and don't forget your
VM/CT data).

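The steps above can be sketched as follows; '<backup-dir>' is a placeholder for wherever you stored the copied database:

```shell
# On the new host, with nothing running:
systemctl stop pve-cluster
# Replace the database with the copy from the failed host:
cp /<backup-dir>/config.db /var/lib/pve-cluster/config.db
chown root:root /var/lib/pve-cluster/config.db
chmod 0600 /var/lib/pve-cluster/config.db
# Then adapt /etc/hostname and /etc/hosts to the lost host and reboot.
```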
Remove Cluster configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way is to reinstall the node after removing it from
your cluster. This makes sure that all secret cluster/ssh keys and any
shared configuration data are destroyed.

In some cases, you might prefer to put a node back to local mode
without reinstalling it, which is described here:

* stop the cluster file system in '/etc/pve/'

 # systemctl stop pve-cluster

* start it again, but forcing local mode

 # pmxcfs -l

* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
 # rm /var/lib/pve-cluster/corosync.authkey

* stop the cluster file system again

 # systemctl stop pve-cluster

* restart pve services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
 # systemctl restart pveproxy
 # systemctl restart pvestatd