pve-ha-manager (4.0.2) bookworm; urgency=medium

  * cluster resource manager: clear stale maintenance node, which can be
    caused by simultaneous cluster shutdown

 -- Proxmox Support Team <support@proxmox.com>  Tue, 13 Jun 2023 08:35:52 +0200

pve-ha-manager (4.0.1) bookworm; urgency=medium

  * test, simulator: make it possible to add an already running service

  * lrm: do not migrate a service via rebalance-on-start if it is already
    running

  * api: fix/add return description for the status endpoint

  * resources: pve: avoid relying on internal configuration details, use new
    helpers in pve-container and qemu-server

 -- Proxmox Support Team <support@proxmox.com>  Fri, 09 Jun 2023 10:41:06 +0200

pve-ha-manager (4.0.0) bookworm; urgency=medium

  * re-build for Proxmox VE 8 / Debian 12 Bookworm

 -- Proxmox Support Team <support@proxmox.com>  Wed, 24 May 2023 19:26:51 +0200

pve-ha-manager (3.6.1) bullseye; urgency=medium

  * cli: assert that the node exists when changing the CRS request state to
    avoid creating a phantom node by mistake

  * manager: ensure node-request state gets transferred to the new active
    CRM, so that the request for (manual) maintenance mode is upheld, even if
    the node that is in maintenance mode is also the current active CRM and
    gets rebooted

  * lrm: ignore the shutdown policy if (manual) maintenance mode is requested
    to avoid exiting from maintenance mode too early

 -- Proxmox Support Team <support@proxmox.com>  Thu, 20 Apr 2023 14:16:14 +0200

pve-ha-manager (3.6.0) bullseye; urgency=medium

  * fix #4371: add a CRM command to switch an online node manually into
    maintenance (without reboot), moving away all active services but
    automatically migrating them back once maintenance mode is disabled again

  * manager: service start: make EWRONG_NODE a non-fatal error, but try to
    find the actual node the service is residing on

  * manager: add new intermediate 'request_started' state for stop->start
    transitions

  * request start: optionally enable automatic selection of the best-rated
    node by the CRS on service start-up, bypassing the very high priority of
    the current node on which a service is located

 -- Proxmox Support Team <support@proxmox.com>  Mon, 20 Mar 2023 13:38:26 +0100

pve-ha-manager (3.5.1) bullseye; urgency=medium

  * manager: update CRS scheduling mode once per round to avoid the need for
    a restart of the currently active manager

  * api: status: add CRS info to manager if not set to default

 -- Proxmox Support Team <support@proxmox.com>  Sat, 19 Nov 2022 15:51:11 +0100

pve-ha-manager (3.5.0) bullseye; urgency=medium

  * env: datacenter config: include crs (cluster-resource-scheduling) setting

  * manager: use static resource scheduler when configured

  * manager: avoid scoring nodes if maintenance fallback node is valid

  * manager: avoid scoring nodes when not trying the next node and the
    current node is valid

  * usage: static: use service count on nodes as a fallback

 -- Proxmox Support Team <support@proxmox.com>  Fri, 18 Nov 2022 15:02:55 +0100

pve-ha-manager (3.4.0) bullseye; urgency=medium

  * switch to native version formatting

  * fix accounting of online services when moving services due to their
    source node going gracefully nonoperational (maintenance mode). This
    ensures a better balance of services on the cluster after such an
    operation.

 -- Proxmox Support Team <support@proxmox.com>  Fri, 22 Jul 2022 09:21:20 +0200

pve-ha-manager (3.3-4) bullseye; urgency=medium

  * lrm: fix getting stuck on restart due to finished worker state not
    being collected

 -- Proxmox Support Team <support@proxmox.com>  Wed, 27 Apr 2022 14:01:55 +0200

pve-ha-manager (3.3-3) bullseye; urgency=medium

  * lrm: avoid possible job starvation on huge workloads

  * lrm: increase run_worker loop time for actually doing work to an 80%
    duty cycle

 -- Proxmox Support Team <support@proxmox.com>  Thu, 20 Jan 2022 18:05:33 +0100

pve-ha-manager (3.3-2) bullseye; urgency=medium

  * fix #3826: fix restarting LRM/CRM when triggered by the package
    management system due to other updates

  * lrm: also check the CRM node-status for determining if there's a fence
    request, and avoid starting up in that case to ensure that the current
    manager can get our lock and do a clean fence -> unknown -> online FSM
    transition. This avoids a problematic edge case where an admin manually
    removed all services of a to-be-fenced node and re-added them again
    before the manager could actually get that node's LRM lock.

  * manager: handle an edge case where a node gets seemingly stuck in the
    'fence' state if all its services got manually removed by an admin
    before the fence transition could be finished. While the LRM could come
    up again in previous versions (it won't now, see the point above) and
    start/stop of services got executed, the node was seen as unavailable
    for all recovery, relocation and migrate actions.

 -- Proxmox Support Team <support@proxmox.com>  Wed, 19 Jan 2022 14:30:15 +0100

pve-ha-manager (3.3-1) bullseye; urgency=medium

  * LRM: release lock and close watchdog if no service is configured for
    >10min

  * manager: make recovery an actual state in the finite state machine,
    showing a clear transition from fence -> recovery

  * fix #3415: never switch into the error state on recovery, try harder to
    find a new node. This improves using the HA manager for services with
    local resources (e.g., local storage) to ensure they always get started,
    which is an OK use case as long as the service is restricted to a group
    with only that node. Previously, failure of that node had a high
    probability of the service going into the error state, as no new node
    could be found. Now it will retry finding a new node, and if one of the
    restricted set, e.g., the node it was previously on, comes back up, it
    will start again there.

  * recovery: allow disabling an in-recovery service manually

 -- Proxmox Support Team <support@proxmox.com>  Fri, 02 Jul 2021 20:03:29 +0200

pve-ha-manager (3.2-2) bullseye; urgency=medium

  * fix systemd service restart behavior on package upgrade with Debian
    Bullseye

 -- Proxmox Support Team <support@proxmox.com>  Mon, 24 May 2021 11:38:42 +0200

pve-ha-manager (3.2-1) bullseye; urgency=medium

  * Re-build for Debian Bullseye / PVE 7

 -- Proxmox Support Team <support@proxmox.com>  Wed, 12 May 2021 20:55:53 +0200

pve-ha-manager (3.1-1) pve; urgency=medium

  * allow 'with-local-disks' migration for replicated guests

 -- Proxmox Support Team <support@proxmox.com>  Mon, 31 Aug 2020 10:52:23 +0200

pve-ha-manager (3.0-9) pve; urgency=medium

  * factor out service configured/delete helpers

  * typo and grammar fixes

 -- Proxmox Support Team <support@proxmox.com>  Thu, 12 Mar 2020 13:17:36 +0100

pve-ha-manager (3.0-8) pve; urgency=medium

  * bump LRM stop wait time to an hour

  * do not mark nodes in maintenance mode as unknown

  * api/status: extra handling of maintenance mode

 -- Proxmox Support Team <support@proxmox.com>  Mon, 02 Dec 2019 10:33:03 +0100

pve-ha-manager (3.0-6) pve; urgency=medium

  * add 'migrate' node shutdown policy

  * do a simple fallback if a node comes back online from maintenance

  * account a service to both source and target node during migration

  * add 'After' ordering for SSH and pveproxy to the LRM service, ensuring
    the node stays accessible until HA services got moved or shut down,
    depending on the policy

 -- Proxmox Support Team <support@proxmox.com>  Tue, 26 Nov 2019 18:03:26 +0100

pve-ha-manager (3.0-5) pve; urgency=medium

  * fix #1339: remove more locks from services IF the node got fenced

  * adapt to qemu-server code refactoring

 -- Proxmox Support Team <support@proxmox.com>  Wed, 20 Nov 2019 20:12:49 +0100

pve-ha-manager (3.0-4) pve; urgency=medium

  * use PVE::DataCenterConfig from new split-out cluster library package

 -- Proxmox Support Team <support@proxmox.com>  Mon, 18 Nov 2019 12:16:29 +0100

pve-ha-manager (3.0-3) pve; urgency=medium

  * fix #1919, #1920: improve handling of zombie (without node) services

  * fix #2241: VM resource: allow migration with a local device when not
    running

  * HA status: render removal transition of a service as 'deleting'

  * fix #1140: add crm command 'stop', which allows one to request an
    immediate service hard-stop if a timeout of zero (0) is passed

 -- Proxmox Support Team <support@proxmox.com>  Mon, 11 Nov 2019 17:04:35 +0100

pve-ha-manager (3.0-2) pve; urgency=medium

  * services: update PIDFile to point directly to /run

  * fix #2234: fix typo in service description

  * add missing dependencies to pve-ha-simulator

 -- Proxmox Support Team <support@proxmox.com>  Thu, 11 Jul 2019 19:26:03 +0200

pve-ha-manager (3.0-1) pve; urgency=medium

  * handle the case where a node gets fully purged

  * Re-build for Debian Buster / PVE 6

 -- Proxmox Support Team <support@proxmox.com>  Wed, 22 May 2019 19:11:59 +0200

pve-ha-manager (2.0-9) unstable; urgency=medium

  * get_ha_settings: cope with a (temporarily) unavailable pmxcfs

  * lrm: exit on restart and if the agent lock was lost for > 90s

  * service data: only set the failed_nodes key if needed

 -- Proxmox Support Team <support@proxmox.com>  Thu, 04 Apr 2019 16:27:32 +0200

pve-ha-manager (2.0-8) unstable; urgency=medium

  * address an issue in dpkg 1.18 with wrong trigger cycle detection if
    cyclic dependencies are involved

 -- Proxmox Support Team <support@proxmox.com>  Wed, 06 Mar 2019 07:49:58 +0100

pve-ha-manager (2.0-7) unstable; urgency=medium

  * fix #1842: do not pass forceStop to CT shutdown

  * fix #1602: allow one to delete ignored services over the API

  * fix #1891: add zsh command completion for the ha-manager CLI tools

  * fix #1794: VM resource: catch QMP command exceptions

  * show sent emails in regression tests

 -- Proxmox Support Team <support@proxmox.com>  Mon, 04 Mar 2019 10:37:25 +0100

pve-ha-manager (2.0-6) unstable; urgency=medium

  * fix #1378: allow one to specify a service shutdown policy

  * remove some unused external dependencies from the standalone simulator
    package

  * document the API result for HA resources

 -- Proxmox Support Team <support@proxmox.com>  Mon, 07 Jan 2019 12:59:27 +0100

pve-ha-manager (2.0-5) unstable; urgency=medium

  * skip CRM and LRM work if the last cfs update failed

  * regression test system: allow simulating cluster fs failures

  * postinst: drop transitional cleanup for the systemd watchdog mux socket

 -- Proxmox Support Team <support@proxmox.com>  Wed, 07 Feb 2018 11:00:12 +0100

pve-ha-manager (2.0-4) unstable; urgency=medium

  * address timing issues happening when pve-cluster.service is being
    restarted

 -- Proxmox Support Team <support@proxmox.com>  Thu, 09 Nov 2017 11:46:50 +0100

pve-ha-manager (2.0-3) unstable; urgency=medium

  * add ignore state for resources

  * lrm/crm service: restart on API changes

  * lrm.service: do not time out on stop

  * fix #1347: let postfix fill in the FQDN in fence mails

  * fix #1073: do not count backup-suspended VMs as running

 -- Proxmox Support Team <support@proxmox.com>  Fri, 13 Oct 2017 11:10:51 +0200

pve-ha-manager (2.0-2) unstable; urgency=medium

  * explicitly sync journal when disabling watchdog updates

  * always queue service stop if node shuts down

  * fix shutdown order of HA and storage services

  * Resource/API: abort early if resource is in error state

 -- Proxmox Support Team <support@proxmox.com>  Wed, 14 Jun 2017 07:49:59 +0200

pve-ha-manager (2.0-1) unstable; urgency=medium

  * rebuild for PVE 5.0 / Debian Stretch

 -- Proxmox Support Team <support@proxmox.com>  Mon, 13 Mar 2017 11:31:53 +0100

pve-ha-manager (1.0-40) unstable; urgency=medium

  * ha-simulator: allow adding services at runtime

  * ha-simulator: allow deleting services via the GUI

  * ha-simulator: allow setting new service request states over the GUI

  * ha-simulator: use JSON instead of Dumper for the manager status view

 -- Proxmox Support Team <support@proxmox.com>  Tue, 24 Jan 2017 10:03:07 +0100

pve-ha-manager (1.0-39) unstable; urgency=medium

  * add setup_environment hook to CLIHandler class

  * ha-simulator: fix typo s/Mode/Node/

  * is_node_shutdown: check for correct systemd targets

  * Simulator: fix scrolling to end of cluster log view

  * Simulator: do not use cursor position to insert log

 -- Proxmox Support Team <support@proxmox.com>  Thu, 12 Jan 2017 13:15:08 +0100

pve-ha-manager (1.0-38) unstable; urgency=medium

  * update manual page

 -- Proxmox Support Team <support@proxmox.com>  Wed, 23 Nov 2016 11:46:21 +0100

pve-ha-manager (1.0-37) unstable; urgency=medium

  * HA::Status: provide better/faster feedback

  * Manager.pm: store flag to indicate successful start

  * ha status: include common service attributes

  * Groups.pm: add verbose_description for 'restricted'

  * Resources.pm: use verbose_description for state

  * pve-ha-group-node-list: add verbose_description

  * ha-manager: remove 'enabled' and 'disabled' commands

  * rename request state 'enabled' to 'started'

  * get_pve_lock: correctly send a lock update request

 -- Proxmox Support Team <support@proxmox.com>  Tue, 22 Nov 2016 17:04:57 +0100

pve-ha-manager (1.0-36) unstable; urgency=medium

  * Resources: implement 'stopped' state

  * ha-manager: remove obsolete POD content

  * fix #1189: correct spelling in fence mail

  * API/Status: avoid using HA Environment

  * factor out resource config check and default set code

 -- Proxmox Support Team <support@proxmox.com>  Tue, 15 Nov 2016 16:42:07 +0100

pve-ha-manager (1.0-35) unstable; urgency=medium

  * change service state to error if no recovery node is available

  * clean up backup & mounted locks after recovery (fixes #1100)

  * add possibility to simulate locks from services

  * don't run regression tests when building the simulator package

 -- Proxmox Support Team <support@proxmox.com>  Thu, 15 Sep 2016 13:23:00 +0200

pve-ha-manager (1.0-34) unstable; urgency=medium

  * fix race condition on slow resource commands in started state

 -- Proxmox Support Team <support@proxmox.com>  Mon, 12 Sep 2016 13:07:05 +0200

pve-ha-manager (1.0-33) unstable; urgency=medium

  * relocate policy: try to avoid already failed nodes

  * allow empty JSON status files

  * more regression tests

 -- Proxmox Support Team <support@proxmox.com>  Fri, 22 Jul 2016 12:16:48 +0200

pve-ha-manager (1.0-32) unstable; urgency=medium

  * use correct verify function for ha-group-node-list

  * send email on fence failure and success

 -- Proxmox Support Team <support@proxmox.com>  Wed, 15 Jun 2016 17:01:12 +0200

pve-ha-manager (1.0-31) unstable; urgency=medium

  * select_service_node: include all online nodes in default group

  * LRM: do not count erroneous service as active

  * fix relocate/restart trial count leak on service deletion

 -- Proxmox Support Team <support@proxmox.com>  Fri, 06 May 2016 08:26:10 +0200

pve-ha-manager (1.0-30) unstable; urgency=medium

  * Env: allow debug logging

 -- Proxmox Support Team <support@proxmox.com>  Fri, 29 Apr 2016 16:50:34 +0200

pve-ha-manager (1.0-29) unstable; urgency=medium

  * Resources: deny setting nonexistent group

 -- Proxmox Support Team <support@proxmox.com>  Wed, 20 Apr 2016 18:22:28 +0200

pve-ha-manager (1.0-28) unstable; urgency=medium

  * Config: add get_service_status method

 -- Proxmox Support Team <support@proxmox.com>  Tue, 19 Apr 2016 08:41:22 +0200

pve-ha-manager (1.0-27) unstable; urgency=medium

  * use pve-doc-generator to generate man pages

 -- Proxmox Support Team <support@proxmox.com>  Fri, 08 Apr 2016 08:25:07 +0200

pve-ha-manager (1.0-26) unstable; urgency=medium

  * status: show added but not yet active services

  * status: mark CRM as idle if no service is configured

 -- Proxmox Support Team <support@proxmox.com>  Tue, 15 Mar 2016 12:49:18 +0100

pve-ha-manager (1.0-25) unstable; urgency=medium

  * use config_file from PVE::QemuConfig

 -- Proxmox Support Team <support@proxmox.com>  Tue, 08 Mar 2016 11:50:49 +0100

pve-ha-manager (1.0-24) unstable; urgency=medium

  * simulator: install all virtual resources

 -- Proxmox Support Team <support@proxmox.com>  Wed, 02 Mar 2016 10:30:40 +0100

pve-ha-manager (1.0-23) unstable; urgency=medium

  * fix infinite started <=> migrate cycle

  * exec_resource_agent: process error state early

  * avoid out-of-sync command execution in LRM

  * do not pass ETRY_AGAIN back to the CRM

 -- Proxmox Support Team <support@proxmox.com>  Wed, 24 Feb 2016 12:15:21 +0100

pve-ha-manager (1.0-22) unstable; urgency=medium

  * fix 'change_service_location' misuse and recovery from fencing

  * add VirtFail resource and use it in new regression tests

  * improve relocation policy code in manager and LRM

  * improve verbosity of API status call

 -- Proxmox Support Team <support@proxmox.com>  Mon, 15 Feb 2016 10:57:44 +0100

pve-ha-manager (1.0-21) unstable; urgency=medium

  * fix postinstall script not removing watchdog-mux.socket

 -- Proxmox Support Team <support@proxmox.com>  Thu, 04 Feb 2016 18:23:47 +0100

pve-ha-manager (1.0-20) unstable; urgency=medium

  * LRM: do not release lock on shutdown errors

  * split up resources and move them to their own sub-folder

  * add virtual resources for tests and simulation

  * add after_fork method to HA environment and use it in LRM

 -- Proxmox Support Team <support@proxmox.com>  Wed, 27 Jan 2016 17:05:23 +0100

pve-ha-manager (1.0-19) unstable; urgency=medium

  * remove 'running' from migrate/relocate log message

  * LRM: release agent lock on graceful shutdown

  * LRM: release agent lock also on restart

  * CRM: release lock on shutdown request

  * TestHardware: correct shutdown/reboot behaviour of CRM and LRM

  * resource agents: fix relocate

 -- Proxmox Support Team <support@proxmox.com>  Mon, 18 Jan 2016 12:41:08 +0100

pve-ha-manager (1.0-18) unstable; urgency=medium

  * pve-ha-lrm.service: depend on lxc.service

  * output watchdog module name if it gets loaded

  * remove watchdog-mux.socket

 -- Proxmox Support Team <support@proxmox.com>  Tue, 12 Jan 2016 12:27:49 +0100

pve-ha-manager (1.0-17) unstable; urgency=medium

  * Resources.pm: use PVE::API2::LXC

 -- Proxmox Support Team <support@proxmox.com>  Mon, 11 Jan 2016 12:25:38 +0100

pve-ha-manager (1.0-16) unstable; urgency=medium

  * check_active_workers: fix typo /uuid/uid/

 -- Proxmox Support Team <support@proxmox.com>  Mon, 21 Dec 2015 10:21:30 +0100

pve-ha-manager (1.0-15) unstable; urgency=medium

  * stop all resources on node shutdown (instead of freeze)

 -- Proxmox Support Team <support@proxmox.com>  Wed, 16 Dec 2015 10:33:30 +0100

pve-ha-manager (1.0-14) unstable; urgency=medium

  * allow configuring the watchdog module in /etc/default/pve-ha-manager

 -- Proxmox Support Team <support@proxmox.com>  Thu, 03 Dec 2015 11:09:47 +0100

pve-ha-manager (1.0-13) unstable; urgency=medium

  * HA API: fix permissions

 -- Proxmox Support Team <support@proxmox.com>  Fri, 30 Oct 2015 11:16:50 +0100

pve-ha-manager (1.0-12) unstable; urgency=medium

  * add constants to gain more readability

  * exec_resource_agent: return valid exit codes instead of dying

  * code cleanups

 -- Proxmox Support Team <support@proxmox.com>  Thu, 29 Oct 2015 10:21:49 +0100

pve-ha-manager (1.0-11) unstable; urgency=medium

  * add workaround for bug #775

 -- Proxmox Support Team <support@proxmox.com>  Wed, 21 Oct 2015 08:58:41 +0200

pve-ha-manager (1.0-10) unstable; urgency=medium

  * better resource status check on addition and update

 -- Proxmox Support Team <support@proxmox.com>  Mon, 12 Oct 2015 18:26:24 +0200

pve-ha-manager (1.0-9) unstable; urgency=medium

  * delete node from CRM status when deleted from cluster

 -- Proxmox Support Team <support@proxmox.com>  Tue, 29 Sep 2015 07:35:30 +0200

pve-ha-manager (1.0-8) unstable; urgency=medium

  * use new lock domain sub instead of storage lock

 -- Proxmox Support Team <support@proxmox.com>  Sat, 26 Sep 2015 10:36:09 +0200

pve-ha-manager (1.0-7) unstable; urgency=medium

  * enhance ha-manager's group commands

  * vm_is_ha_managed: allow check on service state

  * improve Makefile

 -- Proxmox Support Team <support@proxmox.com>  Mon, 21 Sep 2015 12:17:41 +0200

pve-ha-manager (1.0-6) unstable; urgency=medium

  * implement bash completion for ha-manager

  * implement recovery policy for services

  * simulator: fix random output of manager status

 -- Proxmox Support Team <support@proxmox.com>  Wed, 16 Sep 2015 12:06:12 +0200

pve-ha-manager (1.0-5) unstable; urgency=medium

  * replace hardcoded qemu commands with plugin calls

  * improve error state behaviour

 -- Proxmox Support Team <support@proxmox.com>  Tue, 08 Sep 2015 08:45:36 +0200

pve-ha-manager (1.0-4) unstable; urgency=medium

  * groups: encode nodes as hash (internally)

  * add trigger for pve-api-updates

 -- Proxmox Support Team <support@proxmox.com>  Tue, 16 Jun 2015 09:59:03 +0200

pve-ha-manager (1.0-3) unstable; urgency=medium

  * CRM: do not start if there is no resource.cfg file to avoid warnings

 -- Proxmox Support Team <support@proxmox.com>  Tue, 09 Jun 2015 14:35:09 +0200

pve-ha-manager (1.0-2) unstable; urgency=medium

  * use Wants instead of Requires inside systemd service definitions

 -- Proxmox Support Team <support@proxmox.com>  Tue, 09 Jun 2015 09:33:24 +0200

pve-ha-manager (1.0-1) unstable; urgency=medium

  * enable/start crm and lrm services by default

 -- Proxmox Support Team <support@proxmox.com>  Fri, 05 Jun 2015 10:03:53 +0200

pve-ha-manager (0.9-3) unstable; urgency=medium

  * regression test improvements

 -- Proxmox Support Team <support@proxmox.com>  Fri, 10 Apr 2015 06:53:23 +0200

pve-ha-manager (0.9-2) unstable; urgency=medium

  * issue warning if HA group does not exist

 -- Proxmox Support Team <support@proxmox.com>  Tue, 07 Apr 2015 09:52:07 +0200

pve-ha-manager (0.9-1) unstable; urgency=medium

  * rename vm resource prefix: pvevm: => vm:

  * add API to query HA status

  * allow using just the VMID as a resource ID

  * finalize the HA API

 -- Proxmox Support Team <support@proxmox.com>  Fri, 03 Apr 2015 06:18:05 +0200

pve-ha-manager (0.8-2) unstable; urgency=medium

  * lrm: reduce TimeoutStopSec to 95

  * lrm: set systemd KillMode to 'process'

 -- Proxmox Support Team <support@proxmox.com>  Thu, 02 Apr 2015 08:48:24 +0200

pve-ha-manager (0.8-1) unstable; urgency=medium

  * correctly send cfs lock update request

 -- Proxmox Support Team <support@proxmox.com>  Thu, 02 Apr 2015 08:18:00 +0200

pve-ha-manager (0.7-1) unstable; urgency=medium

  * create /etc/pve/ha automatically

  * use correct package for lock_ha_config

  * fix ha-manager status when HA is unconfigured

  * do not unlink watchdog socket when started via systemd

  * depend on systemd

 -- Proxmox Support Team <support@proxmox.com>  Wed, 01 Apr 2015 11:05:08 +0200

pve-ha-manager (0.6-1) unstable; urgency=medium

  * move configuration handling into PVE::HA::Config

  * ha-manager status: add --verbose flag

  * depend on qemu-server

 -- Proxmox Support Team <support@proxmox.com>  Fri, 27 Mar 2015 12:28:50 +0100

pve-ha-manager (0.5-1) unstable; urgency=medium

  * implement service migration

  * fix service dependencies (allow restart, reboot)

  * freeze services during reboot/restart

 -- Proxmox Support Team <support@proxmox.com>  Thu, 26 Mar 2015 13:22:58 +0100

pve-ha-manager (0.4-1) unstable; urgency=medium

  * increase fence_delay to 60 seconds

  * fix regression test environment

  * fix failover after master crash with pending fence action

 -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Mar 2015 13:59:28 +0100

pve-ha-manager (0.3-1) unstable; urgency=medium

  * really activate softdog

  * correctly count active services

  * implement fence_delay to avoid immediate fencing

  * pve-ha-simulator: reset watchdog with poweroff

  * pve-ha-simulator: use option nofailback for default groups

 -- Proxmox Support Team <support@proxmox.com>  Mon, 16 Mar 2015 13:03:23 +0100

pve-ha-manager (0.2-1) unstable; urgency=medium

  * add ha-manager command line tool

  * start implementing resources and groups API

 -- Proxmox Support Team <support@proxmox.com>  Fri, 13 Mar 2015 09:26:12 +0100

pve-ha-manager (0.1-1) unstable; urgency=low

  * first package

 -- Proxmox Support Team <support@proxmox.com>  Wed, 18 Feb 2015 11:30:21 +0100