ifdef::manvolnum[]
pvenode(1)
==========
:pve-toplevel:

NAME
----

pvenode - Proxmox VE Node Management

SYNOPSIS
--------

include::pvenode.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]

[[proxmox_node_management]]
Proxmox Node Management
-----------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]
endif::manvolnum[]

The {PVE} node management tool (`pvenode`) allows you to control node-specific
settings and resources.

Currently, `pvenode` allows you to set a node's description, run various
bulk operations on the node's guests, view the node's task history, and
manage the node's SSL certificates, which are used for the API and the web GUI
through `pveproxy`.

ifdef::manvolnum[]
include::output-format.adoc[]

Examples
~~~~~~~~

.Install an externally provided certificate

`pvenode cert set certificate.crt certificate.key -force`

Both files need to be PEM encoded. `certificate.key` contains the private key
and `certificate.crt` contains the whole certificate chain.
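Before installing the files, you can sanity-check that the key and the
certificate actually belong together by comparing their public-key digests
with `openssl`. The sketch below assumes an RSA key; the self-signed pair it
generates is a throwaway used purely for illustration:

----
# generate a throwaway self-signed pair, for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.invalid" -keyout certificate.key -out certificate.crt
# the two digests match if (and only if) the key belongs to the certificate
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa  -noout -modulus -in certificate.key | openssl md5
----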

.Set up an ACME account and order a certificate for the local node

----
pvenode acme account register default mail@example.invalid
pvenode config set --acme domains=example.invalid
pvenode acme cert order
systemctl restart pveproxy
----

endif::manvolnum[]

Wake-on-LAN
~~~~~~~~~~~
Wake-on-LAN (WoL) allows you to switch on a sleeping computer over the network
by sending a magic packet. At least one NIC must support this feature, and the
respective option needs to be enabled in the computer's firmware (BIOS/UEFI)
configuration. The option name can vary from 'Enable Wake-on-Lan' to
'Power On By PCIE Device'; check your motherboard vendor's manual if you're
unsure. You can check the WoL configuration of `<interface>` with `ethtool`
by running:

----
ethtool <interface> | grep Wake-on
----

`pvenode` allows you to wake sleeping members of a cluster via WoL, using the
command:

----
pvenode wakeonlan <node>
----

This broadcasts the WoL magic packet on UDP port 9, containing the MAC address
of `<node>` obtained from the `wakeonlan` property. The node-specific
`wakeonlan` property can be set using the following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX
----
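For illustration, the magic packet itself is simple to construct: 6 bytes of
`0xFF` followed by the target MAC address repeated 16 times, 102 bytes in
total. A minimal bash sketch (the MAC address below is a placeholder; actually
transmitting the packet would additionally require a UDP broadcast tool, which
is out of scope here):

----
# build a WoL magic packet: 6 x 0xff, then the MAC repeated 16 times
MAC="aa:bb:cc:dd:ee:ff"       # placeholder address
machex="${MAC//:/}"
hex="ffffffffffff"
for _ in $(seq 16); do hex="${hex}${machex}"; done
# convert the hex string to raw bytes
printf '%b' "$(printf '%s' "$hex" | sed 's/../\\x&/g')" > magic.bin
wc -c magic.bin               # 102 bytes
----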

Optionally, you can specify the interface used to send the WoL packet by
setting the `bind-interface` property:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,bind-interface=<iface-name>
----

The broadcast address used when sending the WoL packet can additionally be set
via the `broadcast-address` property:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,broadcast-address=<broadcast-address>
----

Task History
~~~~~~~~~~~~

When troubleshooting server issues, for example failed backup jobs, it can
often be helpful to have a log of the previously run tasks. With {pve}, you can
access the node's task history through the `pvenode task` command.

You can get a filtered list of a node's finished tasks with the `list`
subcommand. For example, to get a list of tasks related to VM '100'
that ended with an error, the command would be:

----
pvenode task list --errors --vmid 100
----

The log of a task can then be printed using its UPID:

----
pvenode task log UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:
----
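The UPID itself is a colon-separated record holding, among other things, the
node name, the worker PID and process start time, the task start time (as
hexadecimal epoch seconds), the worker type, the guest ID, and the user. A
quick shell sketch for pulling one apart (field layout assumed from the
example above):

----
upid="UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:"
IFS=: read -r _ node pid pstart starttime wtype vmid user _ <<< "$upid"
echo "node=$node type=$wtype vmid=$vmid user=$user"
# the task start time is hexadecimal epoch seconds
date -u -d "@$((16#$starttime))"
----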


Bulk Guest Power Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you have many VMs/containers, starting and stopping guests can be carried
out in bulk with the `startall` and `stopall` subcommands of `pvenode`. By
default, `pvenode startall` will only start VMs/containers which have been set
to automatically start on boot (see
xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]);
however, you can override this behavior with the `--force` flag. Both commands
also have a `--vms` option, which limits the stopped/started guests to the
specified VMIDs.

For example, to start VMs '100', '101', and '102', regardless of whether they
have `onboot` set, you can use:

----
pvenode startall --vms 100,101,102 --force
----

To stop these guests (and any other guests that may be running), use the
command:

----
pvenode stopall
----

NOTE: The `stopall` command first attempts a clean shutdown and then waits
until either all guests have successfully shut down or an overridable timeout
(3 minutes by default) has expired. Once that happens, and unless the
`force-stop` parameter is explicitly set to 0 (false), all virtual guests that
are still running are hard stopped.


[[first_guest_boot_delay]]
First Guest Boot Delay
~~~~~~~~~~~~~~~~~~~~~~

If your VMs/containers rely on slow-to-start external resources, for example
an NFS server, you can also set a per-node delay between the time {pve} boots
and the time the first VM/container configured to autostart boots (see
xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]).

You can achieve this by setting the following (where `10` represents the delay
in seconds):

----
pvenode config set --startall-onboot-delay 10
----


Bulk Guest Migration
~~~~~~~~~~~~~~~~~~~~

If an upgrade requires you to migrate all of your guests from one node to
another, `pvenode` also offers the `migrateall` subcommand for bulk migration.
By default, this command migrates every guest on the system to the target
node. However, it can be restricted to a selected set of guests.

For example, to migrate VMs '100', '101', and '102' to the node 'pve2', with
live-migration for local disks enabled, you can run:

----
pvenode migrateall pve2 --vms 100,101,102 --with-local-disks
----

// TODO: explain node shutdown (stopall is used there) and maintenance options

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]