ifdef::manvolnum[]
pvenode(1)
==========
:pve-toplevel:

NAME
----

pvenode - Proxmox VE Node Management

SYNOPSIS
--------

include::pvenode.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]

[[proxmox_node_management]]
Proxmox Node Management
-----------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]
endif::manvolnum[]

The {PVE} node management tool (`pvenode`) allows you to control node-specific
settings and resources.

Currently, `pvenode` allows you to set a node's description, run various
bulk operations on the node's guests, view the node's task history, and
manage the node's SSL certificates, which are used for the API and the web GUI
through `pveproxy`.
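
For example, a node's description can be set directly from the command line.
This is a minimal sketch; the description text is merely a placeholder:

----
# set a free-form description for the local node (example text)
pvenode config set --description "Backup node in rack B2"
----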

ifdef::manvolnum[]
include::output-format.adoc[]

Examples
~~~~~~~~

.Install an externally provided certificate

`pvenode cert set certificate.crt certificate.key -force`

Both files need to be PEM encoded. `certificate.key` contains the private key
and `certificate.crt` contains the whole certificate chain.
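
If you are unsure whether the files are in the required format, `openssl` can
be used for a quick sanity check before installing them. The file names below
match the example above:

----
# print the subject and expiry date of the first certificate in the chain
openssl x509 -in certificate.crt -noout -subject -enddate
# confirm that the private key can be parsed (works for RSA and EC keys)
openssl pkey -in certificate.key -noout
----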

.Set up an ACME account and order a certificate for the local node

-----
pvenode acme account register default mail@example.invalid
pvenode config set --acme domains=example.invalid
pvenode acme cert order
systemctl restart pveproxy
-----

endif::manvolnum[]

Wake-on-LAN
~~~~~~~~~~~

Wake-on-LAN (WoL) allows you to switch on a sleeping computer in the network by
sending a magic packet. At least one NIC must support this feature, and the
respective option needs to be enabled in the computer's firmware (BIOS/UEFI)
configuration. The option name can vary from 'Enable Wake-on-Lan' to
'Power On By PCIE Device'; check your motherboard's vendor manual if you're
unsure. `ethtool` can be used to check the WoL configuration of `<interface>`
by running:

----
ethtool <interface> | grep Wake-on
----
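
In the output, the `Supports Wake-on` line lists the modes the NIC is capable
of, and the `Wake-on` line shows the currently active mode; a `g` means waking
via magic packet is enabled. The values below are only an illustration and
will differ between NICs:

----
# example output; actual values depend on the NIC and driver
        Supports Wake-on: pumbg
        Wake-on: g
----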

`pvenode` allows you to wake sleeping members of a cluster via WoL, using the
command:

----
pvenode wakeonlan <node>
----

This broadcasts the WoL magic packet on UDP port 9, containing the MAC address
of `<node>` obtained from the `wakeonlan` property. The node-specific
`wakeonlan` property can be set using the following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX
----

The interface via which to send the WoL packet is determined from the default
route. It can be overridden by setting the `bind-interface` property via the
following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,bind-interface=<iface-name>
----

The broadcast address (default `255.255.255.255`) used when sending the WoL
packet can further be changed by setting the `broadcast-address` explicitly
using the following command:

----
pvenode config set -wakeonlan XX:XX:XX:XX:XX:XX,broadcast-address=<broadcast-address>
----

Task History
~~~~~~~~~~~~

When troubleshooting server issues, for example failed backup jobs, it can
often be helpful to have a log of the previously run tasks. With {pve}, you can
access the node's task history through the `pvenode task` command.

You can get a filtered list of a node's finished tasks with the `list`
subcommand. For example, to get a list of tasks related to VM '100'
that ended with an error, the command would be:

----
pvenode task list --errors --vmid 100
----

The log of a task can then be printed using its UPID:

----
pvenode task log UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:
----
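
The status of a single task can be queried in the same way, assuming the
`status` subcommand of `pvenode task` is available on your version (the UPID
below is the one from the example above):

----
pvenode task status UPID:pve1:00010D94:001CA6EA:6124E1B9:vzdump:100:root@pam:
----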


Bulk Guest Power Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In case you have many VMs/containers, starting and stopping guests can be
carried out in bulk operations with the `startall` and `stopall` subcommands of
`pvenode`. By default, `pvenode startall` will only start VMs/containers which
have been set to automatically start on boot (see
xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]),
however, you can override this behavior with the `--force` flag. Both commands
also have a `--vms` option, which limits the stopped/started guests to the
specified VMIDs.

For example, to start VMs '100', '101', and '102', regardless of whether they
have `onboot` set, you can use:

----
pvenode startall --vms 100,101,102 --force
----

To stop these guests (and any other guests that may be running), use the
command:

----
pvenode stopall
----

NOTE: The `stopall` command first attempts to perform a clean shutdown and then
waits until either all guests have successfully shut down or an overridable
timeout (3 minutes by default) has expired. Once that happens, and if the
`force-stop` parameter is not explicitly set to 0 (false), all virtual guests
that are still running are hard stopped.
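
For example, to wait up to five minutes for clean shutdowns and to skip the
hard stop of leftover guests, the timeout and force-stop behavior can be
adjusted on the command line. The parameter names below correspond to the bulk
stop API; check `pvenode help stopall` on your installation to confirm them:

----
# allow 300 seconds for shutdowns and do not hard-stop remaining guests
pvenode stopall --timeout 300 --force-stop 0
----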


[[first_guest_boot_delay]]
First Guest Boot Delay
~~~~~~~~~~~~~~~~~~~~~~

In case your VMs/containers rely on slow-to-start external resources, for
example an NFS server, you can also set a per-node delay between the time {pve}
boots and the time the first VM/container that is configured to autostart boots
(see xref:qm_startup_and_shutdown[Automatic Start and Shutdown of Virtual Machines]).

You can achieve this by setting the following (where `10` represents the delay
in seconds):

----
pvenode config set --startall-onboot-delay 10
----


Bulk Guest Migration
~~~~~~~~~~~~~~~~~~~~

In case an upgrade situation requires you to migrate all of your guests from one
node to another, `pvenode` also offers the `migrateall` subcommand for bulk
migration. By default, this command will migrate every guest on the system to
the target node. It can, however, be restricted to a set of guests.

For example, to migrate VMs '100', '101', and '102' to the node 'pve2', with
live-migration for local disks enabled, you can run:

----
pvenode migrateall pve2 --vms 100,101,102 --with-local-disks
----
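
Bulk migrations run multiple migration jobs in parallel. If this is too
aggressive for your network or shared storage, the degree of parallelism can be
limited. The `--maxworkers` parameter below corresponds to the bulk migration
API; the value of `2` is only an example:

----
# run at most two migration jobs at a time (example value)
pvenode migrateall pve2 --vms 100,101,102 --with-local-disks --maxworkers 2
----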

// TODO: explain node shutdown (stopall is used there) and maintenance options

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]