ifdef::manvolnum[]
pvenode(1)
==========
:pve-toplevel:

NAME
----

pvenode - Proxmox VE Node Management

SYNOPSIS
--------

include::pvenode.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]

[[proxmox_node_management]]
Proxmox Node Management
-----------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]
endif::manvolnum[]

The {PVE} node management tool (`pvenode`) allows you to control node-specific
settings and resources.

Currently, `pvenode` allows you to set a node's description, run various
bulk operations on the node's guests, view the node's task history, and
manage the node's SSL certificates, which are used for the API and the web GUI
through `pveproxy`.

ifdef::manvolnum[]
include::output-format.adoc[]

Examples
~~~~~~~~

.Install an externally provided certificate

`pvenode cert set certificate.crt certificate.key --force`

Both files need to be PEM encoded. `certificate.key` contains the private key
and `certificate.crt` contains the whole certificate chain.
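
A PEM file is a base64 payload framed by `BEGIN`/`END` marker lines. As a
quick illustration of that format (this helper is not part of `pvenode`; the
sample certificate text is made up), you can sanity-check a file before
installing it:

```python
# Illustrative only, not part of pvenode: verify that a file looks
# PEM-encoded before installing it with `pvenode cert set`.

def looks_pem_encoded(text: str, block_type: str) -> bool:
    """Return True if `text` contains a PEM block of `block_type`
    (e.g. 'CERTIFICATE'): a BEGIN marker followed by an END marker."""
    begin = f"-----BEGIN {block_type}-----"
    end = f"-----END {block_type}-----"
    return begin in text and end in text and text.index(begin) < text.index(end)

# Hypothetical certificate content, truncated for illustration.
cert_text = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIB...base64 payload...\n"
    "-----END CERTIFICATE-----\n"
)
print(looks_pem_encoded(cert_text, "CERTIFICATE"))  # True
```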

.Set up an ACME account and order a certificate for the local node

----
pvenode acme account register default mail@example.invalid
pvenode config set --acme domains=example.invalid
pvenode acme cert order
systemctl restart pveproxy
----

endif::manvolnum[]

Wake-on-LAN
~~~~~~~~~~~
Wake-on-LAN (WoL) allows you to switch on a sleeping computer in the network by
sending a magic packet. At least one NIC must support this feature, and the
respective option needs to be enabled in the computer's firmware (BIOS/UEFI)
configuration. The option name can vary from 'Enable Wake-on-Lan' to
'Power On By PCIE Device'; check your motherboard vendor's manual if you're
unsure. `ethtool` can be used to check the WoL configuration of `<interface>`
by running:

----
ethtool <interface> | grep Wake-on
----

`pvenode` allows you to wake sleeping members of a cluster via WoL, using the
command:

----
pvenode wakeonlan <node>
----

This broadcasts the WoL magic packet on UDP port 9, containing the MAC address
of `<node>` obtained from the `wakeonlan` property. The node-specific
`wakeonlan` property can be set using the following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX
----
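
The magic packet itself has a fixed, well-known layout: 6 bytes of `0xFF`
followed by the target MAC address repeated 16 times, broadcast over UDP. The
sketch below illustrates what such a packet looks like; it is a standalone
example of the WoL protocol, not `pvenode`'s actual implementation:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes(int(part, 16) for part in mac.split(":"))
    assert len(mac_bytes) == 6, "expected a MAC like XX:XX:XX:XX:XX:XX"
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the defaults mirror those described above."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, port))

# A magic packet is always 6 + 6*16 = 102 bytes long.
print(len(magic_packet("aa:bb:cc:dd:ee:ff")))  # 102
```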

The interface via which to send the WoL packet is determined from the default
route. It can be overridden by setting `bind-interface` via the following
command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,bind-interface=<iface-name>
----

The broadcast address (default `255.255.255.255`) used when sending the WoL
packet can further be changed by setting the `broadcast-address` explicitly
using the following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,broadcast-address=<broadcast-address>
----
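
If you prefer a subnet-directed broadcast over the default
`255.255.255.255`, the value to pass as `broadcast-address` is the broadcast
address of the target node's subnet. It can be derived with Python's standard
`ipaddress` module, for example (the subnet below is a hypothetical example):

```python
import ipaddress

# Derive a subnet-directed broadcast address to use as `broadcast-address`.
# Substitute the subnet that the target node's NIC actually sits on.
network = ipaddress.ip_network("192.168.1.0/24")
print(network.broadcast_address)  # 192.168.1.255
```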

Task History
~~~~~~~~~~~~

When troubleshooting server issues, such as failed backup jobs, it can
often be helpful to have a log of the previously run tasks. With {pve}, you can
access a node's task history through the `pvenode task` command.

You can get a filtered list of a node's finished tasks with the `list`
subcommand. For example, to get a list of tasks related to VM '100'
that ended with an error, the command would be:

----
pvenode task list --errors --vmid 100
----

The log of a task can then be printed using its UPID:

----
pvenode task log UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:
----
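
A UPID is a colon-separated string that encodes, among other things, the node
name, the worker PID, the start time, the task type, the guest ID, and the
user. The sketch below splits the example UPID from above into those fields
under that assumed layout (`UPID:node:pid:pstart:starttime:type:id:user:`,
with `pid`, `pstart`, and `starttime` in hexadecimal); it is an illustration,
not `pvenode`'s own parser:

```python
from datetime import datetime, timezone

def parse_upid(upid: str) -> dict:
    """Split a UPID into its colon-separated fields, assuming the layout
    UPID:node:pid:pstart:starttime:type:id:user: with hexadecimal
    pid, pstart, and starttime."""
    fields = upid.split(":")
    assert fields[0] == "UPID", "not a UPID string"
    return {
        "node": fields[1],
        "pid": int(fields[2], 16),
        "pstart": int(fields[3], 16),
        "starttime": datetime.fromtimestamp(int(fields[4], 16), tz=timezone.utc),
        "type": fields[5],
        "id": fields[6],
        "user": fields[7],
    }

info = parse_upid("UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:")
print(info["node"], info["type"], info["id"])  # pve1 vzdump 100
```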


Bulk Guest Power Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In case you have many VMs/containers, starting and stopping guests can be
carried out in bulk operations with the `startall` and `stopall` subcommands of
`pvenode`. By default, `pvenode startall` will only start VMs/containers which
have been set to automatically start on boot (see
xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]);
however, you can override this behavior with the `--force` flag. Both commands
also have a `--vms` option, which limits the stopped/started guests to the
specified VMIDs.

For example, to start VMs '100', '101', and '102', regardless of whether they
have `onboot` set, you can use:

----
pvenode startall --vms 100,101,102 --force
----

To stop these guests (and any other guests that may be running), use the
command:

----
pvenode stopall
----

NOTE: The `stopall` command first attempts to perform a clean shutdown and then
waits until either all guests have successfully shut down or an overridable
timeout (3 minutes by default) has expired. Once that happens, and unless the
`force-stop` parameter is explicitly set to 0 (false), all virtual guests
that are still running are hard stopped.
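
The semantics described in the note can be sketched as the following control
flow. This is an illustration of the documented behavior only, not
`pvenode`'s actual code; the guest objects and the short timeout are made up
for the example:

```python
import time

def stopall(guests, timeout=180, force_stop=True):
    """Illustrate the documented stopall semantics: request a clean shutdown,
    wait until all guests are down or `timeout` seconds elapse, then
    hard-stop any survivors unless force_stop was explicitly disabled."""
    for guest in guests:
        guest.request_shutdown()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline and any(g.running for g in guests):
        time.sleep(0.01)
    if force_stop:
        for guest in guests:
            if guest.running:
                guest.hard_stop()

class FakeGuest:
    """Stand-in guest that ignores the clean shutdown, forcing a hard stop."""
    def __init__(self):
        self.running = True
    def request_shutdown(self):
        pass  # pretend the guest hangs during shutdown
    def hard_stop(self):
        self.running = False

guests = [FakeGuest(), FakeGuest()]
stopall(guests, timeout=0.05)  # short timeout so the example returns quickly
print(all(not g.running for g in guests))  # True
```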


[[first_guest_boot_delay]]
First Guest Boot Delay
~~~~~~~~~~~~~~~~~~~~~~

In case your VMs/containers rely on slow-to-start external resources, for
example an NFS server, you can also set a per-node delay between the time {pve}
boots and the time the first VM/container that is configured to autostart boots
(see xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]).

You can achieve this by setting the following (where `10` represents the delay
in seconds):

----
pvenode config set --startall-onboot-delay 10
----


Bulk Guest Migration
~~~~~~~~~~~~~~~~~~~~

In case an upgrade situation requires you to migrate all of your guests from one
node to another, `pvenode` also offers the `migrateall` subcommand for bulk
migration. By default, this command will migrate every guest on the system to
the target node. It can, however, be restricted to a chosen set of guests.

For example, to migrate VMs '100', '101', and '102' to the node 'pve2', with
live migration of local disks enabled, you can run:

----
pvenode migrateall pve2 --vms 100,101,102 --with-local-disks
----

// TODO: explain node shutdown (stopall is used there) and maintenance options

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]