From 39ae355f72b1d71f2212a99f2bd9f6c1e0d35528 Mon Sep 17 00:00:00 2001
From: Thomas Lamprecht
Date: Tue, 11 Apr 2023 09:43:51 +0200
Subject: [PATCH] import ceph quincy 17.2.6

Signed-off-by: Thomas Lamprecht
---
 ceph/.github/CODEOWNERS | 1 +
 ceph/.readthedocs.yml | 8 +-
 ceph/CMakeLists.txt | 13 +-
 ceph/CodingStyle | 4 +-
 ceph/PendingReleaseNotes | 40 +
 ceph/admin/rtd-checkout-main | 10 -
 ceph/ceph.spec | 24 +-
 ceph/ceph.spec.in | 18 +-
 ceph/changelog.upstream | 6 +
 ceph/cmake/modules/BuildFIO.cmake | 12 +-
 ceph/debian/control | 1 +
 ceph/doc/_static/css/custom.css | 20 +
 ceph/doc/architecture.rst | 13 +-
 ceph/doc/ceph-volume/lvm/activate.rst | 6 +-
 ceph/doc/ceph-volume/lvm/encryption.rst | 70 +-
 ceph/doc/ceph-volume/lvm/prepare.rst | 211 +-
 ceph/doc/cephadm/adoption.rst | 10 +-
 ceph/doc/cephadm/compatibility.rst | 70 +-
 ceph/doc/cephadm/host-management.rst | 258 +-
 ceph/doc/cephadm/index.rst | 45 +-
 ceph/doc/cephadm/install.rst | 102 +-
 ceph/doc/cephadm/operations.rst | 62 +
 ceph/doc/cephadm/services/index.rst | 59 +-
 ceph/doc/cephadm/services/monitoring.rst | 34 +
 ceph/doc/cephadm/services/osd.rst | 52 +
 ceph/doc/cephadm/services/rgw.rst | 9 +-
 ceph/doc/cephadm/troubleshooting.rst | 107 +-
 ceph/doc/cephadm/upgrade.rst | 25 +-
 ceph/doc/cephfs/add-remove-mds.rst | 2 +
 ceph/doc/cephfs/cephfs-top.png | Bin 13928 -> 41036 bytes
 ceph/doc/cephfs/cephfs-top.rst | 35 +-
 ceph/doc/cephfs/disaster-recovery-experts.rst | 5 +-
 ceph/doc/cephfs/fs-volumes.rst | 66 +-
 ceph/doc/cephfs/mount-using-fuse.rst | 3 +-
 ceph/doc/cephfs/mount-using-kernel-driver.rst | 22 +
 ceph/doc/cephfs/posix.rst | 14 +-
 ceph/doc/cephfs/quota.rst | 62 +
 ceph/doc/cephfs/scrub.rst | 12 +
 ceph/doc/cephfs/snap-schedule.rst | 26 +-
 ceph/doc/dev/ceph_krb_auth.rst | 10 +-
 ceph/doc/dev/cephadm/developing-cephadm.rst | 2 +-
 ceph/doc/dev/cephadm/host-maintenance.rst | 4 +-
 ceph/doc/dev/cephfs-snapshots.rst | 5 +
 ceph/doc/dev/deduplication.rst | 151 -
 .../dev/developer_guide/basic-workflow.rst | 106 +-
 ceph/doc/dev/developer_guide/dash-devel.rst | 96 +-
 ceph/doc/dev/developer_guide/essentials.rst | 5 +
 ...s-integration-testing-teuthology-intro.rst | 105 +-
 .../dev/developer_guide/tests-unit-tests.rst | 7 +
 ceph/doc/dev/documenting.rst | 4 +
 ceph/doc/dev/object-store.rst | 7 +-
 ceph/doc/dev/osd_internals/erasure_coding.rst | 71 +-
 ceph/doc/glossary.rst | 486 +-
 ceph/doc/images/zone-sync.svg | 16707 ++++++----
 ceph/doc/index.rst | 7 +-
 ceph/doc/install/clone-source.rst | 168 +-
 ceph/doc/install/containers.rst | 2 +-
 ceph/doc/install/get-packages.rst | 3 +-
 ceph/doc/install/index.rst | 41 +-
 ceph/doc/man/8/ceph-osd.rst | 2 +
 ceph/doc/man/8/ceph-rbdnamer.rst | 13 +-
 ceph/doc/man/8/crushtool.rst | 10 +
 ceph/doc/man/8/mount.ceph.rst | 13 +
 ceph/doc/man/8/radosgw-admin.rst | 4 +
 ceph/doc/man/8/rbd-nbd.rst | 6 +-
 ceph/doc/man/8/rbd.rst | 8 +-
 ceph/doc/mgr/dashboard.rst | 23 +-
 ceph/doc/mgr/modules.rst | 205 +-
 ceph/doc/mgr/nfs.rst | 28 +-
 ceph/doc/mgr/telemetry.rst | 23 +-
 ceph/doc/rados/api/index.rst | 2 +-
 ceph/doc/rados/api/libcephsqlite.rst | 16 +
 ceph/doc/rados/api/librados-intro.rst | 149 +-
 ceph/doc/rados/api/python.rst | 2 +-
 .../rados/configuration/auth-config-ref.rst | 6 +-
 .../configuration/bluestore-config-ref.rst | 221 +-
 ceph/doc/rados/configuration/ceph-conf.rst | 138 +-
 ceph/doc/rados/configuration/index.rst | 13 +-
 .../rados/configuration/mclock-config-ref.rst | 179 +-
 .../rados/configuration/mon-config-ref.rst | 2 +
 ceph/doc/rados/configuration/msgr2.rst | 13 +-
 .../configuration/network-config-ref.rst | 20 +-
.../configuration/pool-pg-config-ref.rst | 13 +- .../rados/configuration/storage-devices.rst | 17 +- ceph/doc/rados/operations/add-or-rm-mons.rst | 157 +- ceph/doc/rados/operations/add-or-rm-osds.rst | 179 +- ceph/doc/rados/operations/balancer.rst | 121 +- .../rados/operations/bluestore-migration.rst | 205 +- ceph/doc/rados/operations/cache-tiering.rst | 260 +- .../rados/operations/change-mon-elections.rst | 30 +- ceph/doc/rados/operations/control.rst | 344 +- ceph/doc/rados/operations/crush-map-edits.rst | 101 +- ceph/doc/rados/operations/crush-map.rst | 227 +- ceph/doc/rados/operations/devices.rst | 111 +- .../rados/operations/erasure-code-clay.rst | 42 +- .../doc/rados/operations/erasure-code-isa.rst | 26 +- .../operations/erasure-code-jerasure.rst | 26 +- .../doc/rados/operations/erasure-code-lrc.rst | 171 +- .../rados/operations/erasure-code-shec.rst | 11 +- ceph/doc/rados/operations/erasure-code.rst | 162 +- ceph/doc/rados/operations/health-checks.rst | 617 +- .../rados/operations/monitoring-osd-pg.rst | 79 +- ceph/doc/rados/operations/monitoring.rst | 173 +- ceph/doc/rados/operations/pg-repair.rst | 54 +- .../doc/rados/operations/placement-groups.rst | 208 +- ceph/doc/rados/operations/pools.rst | 185 +- ceph/doc/rados/operations/stretch-mode.rst | 48 +- ceph/doc/rados/operations/upmap.rst | 45 +- ceph/doc/rados/operations/user-management.rst | 173 +- ceph/doc/radosgw/STS.rst | 26 +- ceph/doc/radosgw/STSLite.rst | 12 +- ceph/doc/radosgw/index.rst | 19 +- ceph/doc/radosgw/keycloak.rst | 85 +- ceph/doc/radosgw/layout.rst | 2 +- ceph/doc/radosgw/multisite-sync-policy.rst | 2 + ceph/doc/radosgw/multisite.rst | 1582 +- ceph/doc/radosgw/notifications.rst | 339 +- ceph/doc/radosgw/placement.rst | 12 +- ceph/doc/radosgw/s3.rst | 2 +- ceph/doc/radosgw/session-tags.rst | 11 +- ceph/doc/rbd/iscsi-initiator-linux.rst | 83 +- ceph/doc/rbd/iscsi-overview.rst | 30 +- ceph/doc/rbd/rados-rbd-cmds.rst | 312 +- ceph/doc/rbd/rbd-exclusive-locks.rst | 149 +- .../rbd/rbd-persistent-read-only-cache.rst | 41 +- ceph/doc/rbd/rbd-snapshot.rst | 326 +- ceph/doc/releases/argonaut.rst | 185 - ceph/doc/releases/bobtail.rst | 546 - ceph/doc/releases/cuttlefish.rst | 720 - ceph/doc/releases/dumpling.rst | 947 - ceph/doc/releases/emperor.rst | 654 - ceph/doc/releases/firefly.rst | 1787 -- ceph/doc/releases/general.rst | 78 - ceph/doc/releases/giant.rst | 1286 - ceph/doc/releases/hammer.rst | 2325 -- ceph/doc/releases/index.rst | 262 - ceph/doc/releases/infernalis.rst | 1534 - ceph/doc/releases/jewel.rst | 3384 -- ceph/doc/releases/kraken.rst | 2337 -- ceph/doc/releases/luminous.rst | 5552 ---- ceph/doc/releases/mimic.rst | 4475 --- ceph/doc/releases/nautilus.rst | 5155 --- ceph/doc/releases/octopus.rst | 5864 ---- ceph/doc/releases/pacific.rst | 1362 - ceph/doc/releases/quincy.rst | 445 - ceph/doc/security/CVE-2022-0670.rst | 43 + ceph/doc/security/cves.rst | 134 +- ceph/doc/security/index.rst | 5 +- ceph/doc/start/documenting-ceph.rst | 171 +- ceph/make-dist | 15 +- ceph/monitoring/ceph-mixin/README.md | 27 +- .../ceph-mixin/dashboards/host.libsonnet | 27 +- .../ceph-mixin/dashboards/osd.libsonnet | 25 + .../ceph-mixin/dashboards/rgw.libsonnet | 24 +- .../dashboards_out/ceph-cluster.json | 363 +- .../dashboards_out/host-details.json | 87 +- .../dashboards_out/osds-overview.json | 85 + .../dashboards_out/radosgw-overview.json | 10 +- .../ceph-mixin/jsonnetfile.lock.json | 4 +- .../ceph-mixin/prometheus_alerts.libsonnet | 11 + .../ceph-mixin/prometheus_alerts.yml | 12 +- .../ceph-mixin/tests_alerts/test_alerts.yml | 
58 +- .../features/radosgw_overview.feature | 8 +- ceph/monitoring/ceph-mixin/tox.ini | 15 +- ceph/qa/CMakeLists.txt | 2 +- ...ist_health.yaml => ignorelist_health.yaml} | 0 ...ml => ignorelist_wrongly_marked_down.yaml} | 0 ceph/qa/config/rados.yaml | 1 + ceph/qa/distros/all/ubuntu_22.04.yaml | 2 + .../centos_8.stream_container_tools.yaml | 2 +- .../centos_8.stream_container_tools_crun.yaml | 2 +- .../rhel_8.4_container_tools_3.0.yaml | 2 +- .../rhel_8.4_container_tools_rhel8.yaml | 2 +- .../centos_8.stream_container_tools.yaml | 2 +- .../podman/rhel_8.4_container_tools_3.0.yaml | 2 +- .../rhel_8.4_container_tools_rhel8.yaml | 2 +- ...ml => ignorelist_wrongly_marked_down.yaml} | 0 ceph/qa/run_xfstests-obsolete.sh | 6 +- ceph/qa/run_xfstests_qemu.sh | 2 +- ceph/qa/standalone/ceph-helpers.sh | 30 +- .../erasure-code/test-erasure-eio.sh | 1 + ...ock-profile-switch.sh => mclock-config.sh} | 122 +- ceph/qa/standalone/mon/misc.sh | 1 - ceph/qa/standalone/mon/mon-bind.sh | 4 - ceph/qa/standalone/mon/mon-handle-forward.sh | 2 +- .../osd-backfill/osd-backfill-recovery-log.sh | 1 + .../osd-backfill/osd-backfill-space.sh | 1 + ceph/qa/standalone/osd/osd-recovery-space.sh | 1 + .../32bits/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../fs/32bits/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../fs/full/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../fs/full/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../libcephfs/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - ceph/qa/suites/fs/mixed-clients/distro/$ | 0 ceph/qa/suites/fs/mixed-clients/distro/.qa | 1 + .../fs/mixed-clients/distro/centos_8.yaml | 1 + .../fs/mixed-clients/distro/rhel_8.yaml | 1 + .../suites/fs/mixed-clients/distro/ubuntu/+ | 0 .../suites/fs/mixed-clients/distro/ubuntu/.qa | 1 + .../mixed-clients/distro/ubuntu/latest.yaml | 1 + .../distro/ubuntu/overrides.yaml | 4 + .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../multifs/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../multifs/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../fs/shell/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../fs/shell/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../fs/snaps/overrides/ignorelist_health.yaml | 1 + 
.../ignorelist_wrongly_marked_down.yaml | 1 + .../fs/snaps/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../multifs/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../multifs/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../workloads/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../fs/top/overrides/ignorelist_health.yaml | 1 + .../fs/top/overrides/whitelist_health.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../traceless/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../nofs/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../nofs/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - ceph/qa/suites/fs/upgrade/upgraded_client/.qa | 1 + .../upgrade/upgraded_client/from_nautilus/% | 0 .../upgrade/upgraded_client/from_nautilus/.qa | 1 + .../from_nautilus/bluestore-bitmap.yaml | 1 + .../from_nautilus/centos_latest.yaml | 1 + .../upgraded_client/from_nautilus/clusters/% | 0 .../from_nautilus/clusters/.qa | 1 + .../clusters/1-mds-1-client-micro.yaml | 1 + .../upgraded_client/from_nautilus/conf | 1 + .../upgraded_client/from_nautilus/overrides/% | 0 .../from_nautilus/overrides/.qa | 1 + .../overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../from_nautilus/overrides/pg-warn.yaml | 5 + .../upgraded_client/from_nautilus/tasks/% | 0 .../upgraded_client/from_nautilus/tasks/.qa | 1 + .../from_nautilus/tasks/0-nautilus.yaml | 44 + .../from_nautilus/tasks/1-client-upgrade.yaml | 7 + .../from_nautilus/tasks/2-client-sanity.yaml | 4 + .../verify/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../fs/verify/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../volumes/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../volumes/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../workload/overrides/ignorelist_health.yaml | 1 + .../ignorelist_wrongly_marked_down.yaml | 1 + .../workload/overrides/whitelist_health.yaml | 1 - .../whitelist_wrongly_marked_down.yaml | 1 - .../cephadm/with-work/tasks/rotate-keys.yaml | 16 + .../workunits/task/test_iscsi_pids_limit.yaml | 20 + .../workunits/task/test_orch_cli_mon.yaml | 45 + ...ist_health.yaml => ignorelist_health.yaml} | 0 .../rados/dashboard/tasks/dashboard.yaml | 6 + .../rados/singleton/all/mon-auth-caps.yaml | 1 + .../pacific-x/realm.yaml | 5 +- ceph/qa/suites/rgw/cloud-transition/+ | 0 ceph/qa/suites/rgw/cloud-transition/.qa | 1 + .../suites/rgw/cloud-transition/cluster.yaml | 3 + .../rgw/cloud-transition/overrides.yaml | 14 + .../cloud-transition/supported-random-distro$ | 1 + 
.../tasks/cloud_transition_s3tests.yaml | 62 + ceph/qa/suites/rgw/crypt/2-kms/barbican.yaml | 6 +- ...zone-plus-pubsub.yaml => three-zones.yaml} | 5 +- .../suites/rgw/tempest/tasks/rgw_tempest.yaml | 9 +- .../client-upgrade-quincy-reef/.qa | 1 + .../quincy-client-x/rbd/% | 0 .../quincy-client-x/rbd/.qa | 1 + .../quincy-client-x/rbd/0-cluster/+ | 0 .../quincy-client-x/rbd/0-cluster/.qa | 1 + .../rbd/0-cluster/openstack.yaml | 4 + .../quincy-client-x/rbd/0-cluster/start.yaml | 19 + .../quincy-client-x/rbd/1-install/.qa | 1 + .../rbd/1-install/quincy-client-x.yaml | 11 + .../quincy-client-x/rbd/2-workload/.qa | 1 + .../2-workload/rbd_notification_tests.yaml | 34 + .../quincy-client-x/rbd/supported/.qa | 1 + .../rbd/supported/ubuntu_20.04.yaml | 1 + .../octopus-x/rgw-multisite/realm.yaml | 5 +- .../upgrade/quincy-p2p/quincy-p2p-parallel/% | 0 .../point-to-point-upgrade.yaml | 179 + .../supported-all-distro/centos_8.yaml | 1 + .../supported-all-distro/ubuntu_latest.yaml | 2 + .../quincy-p2p/quincy-p2p-stress-split/% | 0 .../quincy-p2p-stress-split/0-cluster/+ | 0 .../0-cluster/openstack.yaml | 6 + .../0-cluster/start.yaml | 33 + .../1-ceph-install/quincy.yaml | 21 + .../1.1.short_pg_log.yaml | 6 + .../2-partial-upgrade/firsthalf.yaml | 13 + .../3-thrash/default.yaml | 27 + .../quincy-p2p-stress-split/4-workload/+ | 0 .../4-workload/fsx.yaml | 8 + .../4-workload/radosbench.yaml | 52 + .../4-workload/rbd-cls.yaml | 10 + .../4-workload/rbd-import-export.yaml | 12 + .../4-workload/rbd_api.yaml | 18 + .../4-workload/readwrite.yaml | 16 + .../4-workload/snaps-few-objects.yaml | 18 + .../5-finish-upgrade.yaml | 8 + .../6-final-workload/+ | 0 .../6-final-workload/rbd-python.yaml | 10 + .../6-final-workload/snaps-many-objects.yaml | 16 + .../objectstore/bluestore-bitmap.yaml | 43 + .../objectstore/bluestore-comp.yaml | 23 + .../objectstore/bluestore-stupid.yaml | 43 + .../objectstore/filestore-xfs.yaml | 15 + .../supported-all-distro/ubuntu_latest.yaml | 2 + .../thrashosds-health.yaml | 15 + ceph/qa/tasks/barbican.py | 15 +- ceph/qa/tasks/cephadm.py | 13 +- ceph/qa/tasks/cephadm_cases/test_cli_mon.py | 71 + ceph/qa/tasks/cephfs/cephfs_test_case.py | 33 +- ceph/qa/tasks/cephfs/filesystem.py | 57 +- ceph/qa/tasks/cephfs/test_fragment.py | 40 + ceph/qa/tasks/cephfs/test_full.py | 8 +- ceph/qa/tasks/cephfs/test_mds_metrics.py | 228 +- ceph/qa/tasks/cephfs/test_newops.py | 25 + ceph/qa/tasks/cephfs/test_nfs.py | 9 + ceph/qa/tasks/cephfs/test_scrub.py | 9 + ceph/qa/tasks/cephfs/test_scrub_checks.py | 3 +- ceph/qa/tasks/cephfs/test_snap_schedules.py | 2 +- ceph/qa/tasks/cephfs/test_snapshots.py | 59 + ceph/qa/tasks/cephfs/test_volumes.py | 68 +- ceph/qa/tasks/cephfs/xfstests_dev.py | 9 +- ceph/qa/tasks/kubeadm.py | 2 +- ceph/qa/tasks/mgr/dashboard/test_rgw.py | 4 +- ceph/qa/tasks/rbd_fio.py | 2 +- ceph/qa/tasks/restart.py | 18 +- ceph/qa/tasks/rgw.py | 1 + ceph/qa/tasks/rgw_cloudtier.py | 122 + ceph/qa/tasks/rgw_multisite.py | 10 +- ceph/qa/tasks/rgw_multisite_tests.py | 2 +- ceph/qa/tasks/s3tests.py | 76 +- ceph/qa/tox.ini | 6 + ceph/qa/valgrind.supp | 16 + ceph/qa/workunits/cephadm/test_cephadm.sh | 15 +- .../cephadm/test_iscsi_pids_limit.sh | 24 + ceph/qa/workunits/cephtool/test.sh | 3 +- .../workunits/fs/snaps/snaptest-git-ceph.sh | 4 +- ceph/qa/workunits/libcephfs/test.sh | 1 + ceph/qa/workunits/mon/auth_key_rotation.sh | 58 + ceph/qa/workunits/rados/test_crash.sh | 5 + .../qa/workunits/rados/test_librados_build.sh | 2 +- ceph/qa/workunits/rbd/cli_generic.sh | 50 + ceph/qa/workunits/rbd/rbd-nbd.sh | 91 +- 
ceph/qa/workunits/rbd/rbd_groups.sh | 51 +- ceph/qa/workunits/rbd/rbd_mirror_bootstrap.sh | 13 +- ceph/qa/workunits/suites/ffsb.sh | 4 +- ceph/qa/workunits/suites/fsx.sh | 6 +- ceph/qa/workunits/windows/run-tests.ps1 | 31 + ceph/qa/workunits/windows/test_rbd_wnbd.py | 919 + ceph/src/.git_version | 4 +- ceph/src/CMakeLists.txt | 29 +- ceph/src/auth/Auth.h | 17 +- ceph/src/auth/Crypto.h | 4 + ceph/src/auth/KeyRing.cc | 48 +- ceph/src/auth/KeyRing.h | 10 +- ceph/src/auth/cephx/CephxKeyServer.cc | 23 +- ceph/src/auth/cephx/CephxKeyServer.h | 6 + ceph/src/auth/cephx/CephxServiceHandler.cc | 30 +- ceph/src/ceph-crash.in | 42 +- ceph/src/ceph-volume/ceph_volume/api/lvm.py | 2 +- .../tests/functional/batch/tox.ini | 2 +- .../lvm/playbooks/test_filestore.yml | 14 +- .../ceph_volume/tests/functional/lvm/tox.ini | 2 +- .../tests/functional/simple/tox.ini | 2 +- ceph/src/ceph-volume/ceph_volume/util/disk.py | 37 +- .../ceph_volume/util/encryption.py | 4 +- ceph/src/ceph-volume/shell_tox.ini | 2 +- ceph/src/ceph-volume/tox.ini | 2 + ceph/src/ceph_fuse.cc | 23 +- ceph/src/ceph_mon.cc | 8 +- ceph/src/ceph_osd.cc | 22 +- ceph/src/cephadm/box/box.py | 8 +- ceph/src/cephadm/cephadm | 337 +- ceph/src/cephadm/tests/fixtures.py | 17 +- ceph/src/cephadm/tests/test_cephadm.py | 69 +- ceph/src/cephadm/tox.ini | 5 +- ceph/src/client/Client.cc | 197 +- ceph/src/client/Client.h | 9 +- ceph/src/client/Inode.cc | 1 + ceph/src/cls/fifo/cls_fifo.cc | 66 +- ceph/src/cls/fifo/cls_fifo_ops.h | 15 +- ceph/src/cls/fifo/cls_fifo_types.h | 176 +- ceph/src/cls/queue/cls_queue_src.cc | 12 +- ceph/src/cls/rbd/cls_rbd.cc | 3 + ceph/src/cls/rgw/cls_rgw.cc | 36 +- ceph/src/cls/rgw/cls_rgw_client.cc | 1 + ceph/src/cls/rgw/cls_rgw_ops.h | 6 + ceph/src/cls/rgw/cls_rgw_types.cc | 18 + ceph/src/cls/rgw/cls_rgw_types.h | 44 + ceph/src/common/OutputDataSocket.cc | 2 + ceph/src/common/admin_socket.cc | 3 + ceph/src/common/admin_socket.h | 3 +- ceph/src/common/ceph_context.cc | 13 +- ceph/src/common/ceph_strings.cc | 1 + ceph/src/common/options/global.yaml.in | 22 +- ceph/src/common/options/mds-client.yaml.in | 8 + ceph/src/common/options/mds.yaml.in | 8 - ceph/src/common/options/osd.yaml.in | 51 +- ceph/src/common/pick_address.cc | 17 +- ceph/src/common/pick_address.h | 1 + ceph/src/common/subsys_types.h | 1 + ceph/src/compressor/CMakeLists.txt | 1 + ceph/src/crush/CrushTester.cc | 22 +- ceph/src/crush/CrushTester.h | 5 +- ceph/src/exporter/DaemonMetricCollector.cc | 110 +- ceph/src/include/ceph_fs.h | 1 + ceph/src/include/rbd/librbd.h | 10 + ceph/src/include/rbd/librbd.hpp | 21 + ceph/src/librbd/AsioEngine.cc | 5 +- ceph/src/librbd/Journal.cc | 43 + ceph/src/librbd/Journal.h | 5 + ceph/src/librbd/LibrbdAdminSocketHook.cc | 1 + ceph/src/librbd/LibrbdAdminSocketHook.h | 1 + ceph/src/librbd/api/Image.cc | 25 +- .../cache/ObjectCacherObjectDispatch.cc | 29 +- ceph/src/librbd/cache/pwl/AbstractWriteLog.cc | 32 +- ceph/src/librbd/io/ImageRequest.cc | 57 +- ceph/src/librbd/io/ImageRequest.h | 2 +- ceph/src/librbd/librbd.cc | 54 +- ceph/src/mds/CDentry.h | 15 +- ceph/src/mds/CDir.cc | 15 +- ceph/src/mds/CDir.h | 2 +- ceph/src/mds/CInode.cc | 11 +- ceph/src/mds/DamageTable.cc | 18 +- ceph/src/mds/FSMap.h | 34 +- ceph/src/mds/Locker.cc | 9 +- ceph/src/mds/MDCache.cc | 87 +- ceph/src/mds/MDCache.h | 3 +- ceph/src/mds/MDLog.cc | 20 +- ceph/src/mds/MDLog.h | 5 +- ceph/src/mds/MDSDaemon.cc | 4 +- ceph/src/mds/PurgeQueue.cc | 3 +- ceph/src/mds/Server.cc | 190 +- ceph/src/mds/Server.h | 3 + ceph/src/mds/cephfs_features.cc | 2 + 
ceph/src/mds/cephfs_features.h | 32 +- ceph/src/messages/MClientCaps.h | 4 +- ceph/src/messages/MDentryUnlink.h | 60 +- ceph/src/messages/MMonUsedPendingKeys.h | 48 + ceph/src/mgr/ActivePyModules.cc | 23 + ceph/src/mgr/ActivePyModules.h | 1 + ceph/src/mgr/BaseMgrModule.cc | 9 + ceph/src/mgr/CMakeLists.txt | 5 +- ceph/src/mgr/ClusterState.cc | 2 + ceph/src/mgr/DaemonHealthMetric.h | 8 +- ceph/src/mgr/DaemonServer.cc | 2 - ceph/src/mgr/Mgr.cc | 1 + ceph/src/mgr/Mgr.h | 1 + ceph/src/mon/AuthMonitor.cc | 169 +- ceph/src/mon/AuthMonitor.h | 3 + ceph/src/mon/ConnectionTracker.cc | 121 +- ceph/src/mon/ConnectionTracker.h | 31 +- ceph/src/mon/ElectionLogic.cc | 30 +- ceph/src/mon/Elector.cc | 62 +- ceph/src/mon/Elector.h | 9 +- ceph/src/mon/FSCommands.cc | 6 + ceph/src/mon/LogMonitor.cc | 15 +- ceph/src/mon/MgrMap.h | 4 +- ceph/src/mon/MonClient.cc | 35 + ceph/src/mon/MonClient.h | 12 +- ceph/src/mon/MonCommands.h | 18 +- ceph/src/mon/MonMap.cc | 3 +- ceph/src/mon/Monitor.cc | 64 +- ceph/src/mon/Monitor.h | 2 +- ceph/src/mon/MonmapMonitor.cc | 4 +- ceph/src/mon/OSDMonitor.cc | 121 +- ceph/src/mon/OSDMonitor.h | 1 + ceph/src/mount/mount.ceph.c | 4 +- ceph/src/msg/Message.cc | 7 + ceph/src/msg/Message.h | 6 + ceph/src/msg/Messenger.cc | 5 +- ceph/src/msg/Messenger.h | 19 +- ceph/src/msg/async/AsyncMessenger.cc | 59 +- ceph/src/msg/async/AsyncMessenger.h | 20 +- ceph/src/neorados/CMakeLists.txt | 3 - ceph/src/neorados/cls/fifo.cc | 387 - ceph/src/neorados/cls/fifo.h | 1754 - ceph/src/os/ObjectStore.h | 2 +- ceph/src/os/bluestore/Allocator.cc | 1 + ceph/src/os/bluestore/BlueFS.cc | 1001 +- ceph/src/os/bluestore/BlueFS.h | 41 +- ceph/src/os/bluestore/BlueStore.cc | 1435 +- ceph/src/os/bluestore/BlueStore.h | 312 +- ceph/src/os/bluestore/FreelistManager.cc | 9 +- ceph/src/os/bluestore/FreelistManager.h | 2 +- ceph/src/os/bluestore/bluefs_types.cc | 1 - ceph/src/os/bluestore/bluefs_types.h | 27 +- ceph/src/os/memstore/MemStore.cc | 2 +- ceph/src/osd/OSD.cc | 301 +- ceph/src/osd/OSD.h | 6 +- ceph/src/osd/OSDMapMapping.h | 2 +- ceph/src/osd/PGBackend.cc | 2 - ceph/src/osd/PrimaryLogPG.h | 2 +- ceph/src/osd/osd_types.h | 4 + ceph/src/osd/scheduler/OpScheduler.cc | 7 +- ceph/src/osd/scheduler/OpScheduler.h | 5 +- ceph/src/osd/scheduler/mClockScheduler.cc | 75 +- ceph/src/osd/scheduler/mClockScheduler.h | 6 +- ceph/src/osd/scrubber/pg_scrubber.cc | 6 +- ceph/src/osd/scrubber/scrub_machine.cc | 8 + ceph/src/osd/scrubber/scrub_machine.h | 6 +- ceph/src/osdc/Journaler.cc | 41 +- ceph/src/osdc/Journaler.h | 5 +- ceph/src/osdc/Objecter.cc | 2 + ceph/src/pybind/cephfs/cephfs.pyx | 6 +- ceph/src/pybind/mgr/balancer/module.py | 3 + ceph/src/pybind/mgr/ceph_module.pyi | 1 + ceph/src/pybind/mgr/cephadm/agent.py | 17 +- ceph/src/pybind/mgr/cephadm/inventory.py | 11 +- ceph/src/pybind/mgr/cephadm/migrations.py | 26 +- ceph/src/pybind/mgr/cephadm/module.py | 185 +- .../src/pybind/mgr/cephadm/offline_watcher.py | 2 +- ceph/src/pybind/mgr/cephadm/schedule.py | 4 +- ceph/src/pybind/mgr/cephadm/serve.py | 149 +- .../mgr/cephadm/services/cephadmservice.py | 109 +- .../pybind/mgr/cephadm/services/ingress.py | 41 +- ceph/src/pybind/mgr/cephadm/services/iscsi.py | 16 +- .../pybind/mgr/cephadm/services/monitoring.py | 36 +- ceph/src/pybind/mgr/cephadm/ssh.py | 38 +- .../templates/services/ingress/haproxy.cfg.j2 | 8 +- .../services/prometheus/prometheus.yml.j2 | 11 + ceph/src/pybind/mgr/cephadm/tests/fixtures.py | 3 +- .../pybind/mgr/cephadm/tests/test_agent.py | 18 + .../pybind/mgr/cephadm/tests/test_cephadm.py | 297 +- 
.../mgr/cephadm/tests/test_migration.py | 29 + .../pybind/mgr/cephadm/tests/test_services.py | 360 +- ceph/src/pybind/mgr/cephadm/tests/test_ssh.py | 2 +- .../mgr/cephadm/tests/test_tuned_profiles.py | 3 +- .../pybind/mgr/cephadm/tests/test_upgrade.py | 48 +- ceph/src/pybind/mgr/cephadm/tuned_profiles.py | 27 +- ceph/src/pybind/mgr/cephadm/upgrade.py | 28 +- ceph/src/pybind/mgr/cephadm/utils.py | 5 +- .../dashboard/ci/cephadm/bootstrap-cluster.sh | 4 +- .../mgr/dashboard/ci/cephadm/ceph_cluster.yml | 2 +- .../ci/cephadm/run-cephadm-e2e-tests.sh | 7 + .../mgr/dashboard/ci/cephadm/start-cluster.sh | 8 +- ceph/src/pybind/mgr/dashboard/constraints.txt | 14 +- .../mgr/dashboard/controllers/_paginate.py | 0 .../mgr/dashboard/controllers/cephfs.py | 11 + .../pybind/mgr/dashboard/controllers/docs.py | 2 +- .../pybind/mgr/dashboard/controllers/host.py | 23 +- .../pybind/mgr/dashboard/controllers/nfs.py | 4 +- .../pybind/mgr/dashboard/controllers/osd.py | 14 +- .../mgr/dashboard/controllers/prometheus.py | 14 +- .../pybind/mgr/dashboard/controllers/rbd.py | 23 +- .../dashboard/controllers/rbd_mirroring.py | 116 +- .../pybind/mgr/dashboard/controllers/rgw.py | 100 +- .../mgr/dashboard/controllers/service.py | 10 +- .../mgr/dashboard/frontend/.eslintrc.json | 87 + .../mgr/dashboard/frontend/CMakeLists.txt | 2 +- .../mgr/dashboard/frontend/angular.json | 136 +- .../mgr/dashboard/frontend/cypress.json | 2 +- .../integration/a11y/dashboard.e2e-spec.ts | 27 + .../integration/a11y/navigation.e2e-spec.ts | 21 + .../cypress/integration/block/mirroring.po.ts | 5 +- .../cypress/integration/cluster/hosts.po.ts | 9 +- .../integration/cluster/osds.e2e-spec.ts | 2 +- .../integration/cluster/services.po.ts | 12 +- .../common/01-global.feature.po.ts | 2 +- .../02-create-cluster-add-host.feature | 14 +- ...create-cluster-create-services.e2e-spec.ts | 4 +- .../04-create-cluster-create-osds.e2e-spec.ts | 2 +- .../workflow/06-cluster-check.e2e-spec.ts | 35 +- .../workflow/08-hosts.e2e-spec.ts | 2 +- .../workflow/09-services.e2e-spec.ts | 30 +- .../workflow/10-nfs-exports.e2e-spec.ts | 2 +- .../cypress/integration/page-helper.po.ts | 6 +- .../cypress/integration/pools/pools.po.ts | 2 +- .../cypress/integration/ui/login.e2e-spec.ts | 6 + .../frontend/cypress/plugins/index.js | 2 +- .../frontend/cypress/support/commands.ts | 31 + .../frontend/cypress/support/index.ts | 9 +- .../dashboard/frontend/cypress/tsconfig.json | 1 + .../dist/en-US/281.7c1918629ff8b413cc76.js | 1 - .../frontend/dist/en-US/3rdpartylicenses.txt | 51 +- .../dist/en-US/43.819b1fed46aadf1b.js | 1 + .../dist/en-US/437.7720eaff4a1def1b.js | 1 + .../dist/en-US/483.43ef92bcd845cb24eae3.js | 1 - .../dist/en-US/585.7d0bcf3a0ac0c40fef3b.js | 1 - .../dist/en-US/95.1ae8f43a396d3fea.js | 1 + ...232.svg => Ceph_Logo.beb815b55d2e7363.svg} | 0 ...f => ceph_background.3fbdf95cd52530d7.gif} | Bin ...forkawesome-webfont.23671bdbd055fa7b.woff} | Bin ... forkawesome-webfont.3217b1b06e001045.svg} | 0 ... forkawesome-webfont.3b3951dce6cf5d60.ttf} | Bin ... 
forkawesome-webfont.c0fee260bb6fd5fd.eot} | Bin ...orkawesome-webfont.d0a4ad9e6369d510.woff2} | Bin .../dashboard/frontend/dist/en-US/index.html | 6 +- .../dist/en-US/main.86799889c70942fa9a19.js | 3 - .../dist/en-US/main.ddd4de0999172734.js | 3 + .../en-US/polyfills.2068f3f22a496426465b.js | 1 - .../dist/en-US/polyfills.4b60b22744014b0b.js | 1 + ...g => prometheus_logo.8057911d27be9bb1.svg} | 0 .../dist/en-US/runtime.4fd39655e7ea619b.js | 1 + .../en-US/runtime.ab6c27cac6d7501e18e8.js | 1 - .../en-US/scripts.6bda3fa7e09a87cd4228.js | 7 - .../dist/en-US/scripts.cfd741a72b67f696.js | 1 + .../dist/en-US/styles.8b6796664b673424.css | 17 + .../en-US/styles.ffb7f665775e3c191fa3.css | 20 - .../mgr/dashboard/frontend/jest.config.cjs | 44 + .../mgr/dashboard/frontend/package-lock.json | 26525 +++++++--------- .../mgr/dashboard/frontend/package.json | 109 +- .../iscsi-tabs/iscsi-tabs.component.html | 20 +- .../block/iscsi-tabs/iscsi-tabs.component.ts | 5 +- ...scsi-target-discovery-modal.component.html | 28 +- .../iscsi-target-form.component.html | 178 +- .../iscsi-target-form.component.ts | 4 +- ...target-image-settings-modal.component.html | 2 +- .../app/ceph/block/iscsi/iscsi.component.html | 18 +- .../bootstrap-create-modal.component.html | 4 +- .../image-list/image-list.component.html | 22 +- .../ceph/block/mirroring/mirroring.module.ts | 5 +- .../overview/overview.component.html | 96 +- .../mirroring/overview/overview.component.ts | 3 +- .../pool-edit-mode-modal.component.html | 2 +- .../pool-list/pool-list.component.html | 10 + .../pool-list/pool-list.component.ts | 18 +- .../rbd-configuration-form.component.html | 22 +- .../rbd-details/rbd-details.component.html | 33 +- .../block/rbd-form/rbd-form.component.html | 22 +- .../block/rbd-form/rbd-form.component.spec.ts | 23 + .../ceph/block/rbd-form/rbd-form.component.ts | 38 +- .../block/rbd-list/rbd-list.component.html | 10 +- .../ceph/block/rbd-list/rbd-list.component.ts | 10 + .../rbd-namespace-form-modal.component.html | 2 +- .../rbd-performance.component.html | 4 +- .../rbd-snapshot-list.component.spec.ts | 4 + .../rbd-snapshot-list.component.ts | 14 + .../block/rbd-tabs/rbd-tabs.component.html | 38 +- .../ceph/block/rbd-tabs/rbd-tabs.component.ts | 3 +- .../cephfs-detail.component.html | 3 +- .../cephfs-directories.component.html | 2 +- .../cephfs-directories.component.ts | 27 +- .../cephfs-tabs/cephfs-tabs.component.html | 34 +- .../configuration-details.component.html | 4 +- .../configuration-form.component.html | 2 +- .../configuration-form.component.spec.ts | 8 +- .../create-cluster.component.html | 14 +- .../host-details/host-details.component.html | 40 +- .../ceph/cluster/hosts/hosts.component.html | 36 +- .../cluster/hosts/hosts.component.spec.ts | 20 +- .../app/ceph/cluster/hosts/hosts.component.ts | 41 +- .../inventory-devices.component.spec.ts | 2 +- .../inventory-devices.component.ts | 12 + .../app/ceph/cluster/logs/logs.component.html | 90 +- .../mgr-module-form.component.html | 2 +- .../cluster/monitor/monitor.component.html | 78 +- .../osd-details/osd-details.component.html | 46 +- ...sd-devices-selection-groups.component.html | 4 +- .../osd-devices-selection-groups.component.ts | 13 +- ...osd-devices-selection-modal.component.html | 5 +- .../osd-devices-selection-modal.component.ts | 12 +- .../osd-flags-indiv-modal.component.html | 2 +- .../osd/osd-form/osd-form.component.html | 86 +- .../osd/osd-form/osd-form.component.ts | 1 + .../osd/osd-list/osd-list.component.html | 74 +- .../osd/osd-list/osd-list.component.ts | 3 + 
.../osd-recv-speed-modal.component.html | 2 +- .../active-alert-list.component.ts | 2 +- .../prometheus-tabs.component.html | 34 +- .../prometheus-tabs.component.ts | 3 +- .../rules-list/rules-list.component.ts | 2 +- .../silence-form/silence-form.component.html | 60 +- .../silence-form.component.spec.ts | 43 +- .../silence-form/silence-form.component.ts | 49 +- .../silence-list.component.spec.ts | 9 + .../silence-list/silence-list.component.ts | 38 +- .../silence-matcher-modal.component.html | 6 +- .../service-daemon-list.component.html | 18 +- .../service-daemon-list.component.spec.ts | 19 +- .../service-daemon-list.component.ts | 29 +- .../service-form/service-form.component.html | 39 +- .../service-form.component.spec.ts | 29 +- .../service-form/service-form.component.ts | 211 +- .../cluster/services/services.component.html | 2 + .../services/services.component.spec.ts | 14 +- .../cluster/services/services.component.ts | 5 +- .../dashboard/dashboard.component.html | 26 +- .../dashboard/dashboard.component.scss | 2 +- .../health-pie/health-pie.component.html | 1 - .../dashboard/health/health.component.html | 17 +- .../dashboard/health/health.component.scss | 4 + .../dashboard/health/health.component.spec.ts | 4 +- .../info-card/info-card-popover.scss | 12 + .../info-card/info-card.component.scss | 4 + .../info-group/info-group.component.html | 27 +- .../info-group/info-group.component.scss | 8 +- .../info-group/info-group.component.spec.ts | 3 +- .../nfs-details/nfs-details.component.html | 18 +- .../nfs-details/nfs-details.component.spec.ts | 2 +- .../nfs-form-client.component.html | 10 +- .../ceph/nfs/nfs-form/nfs-form.component.html | 10 +- .../crush-rule-form-modal.component.html | 6 +- ...ure-code-profile-form-modal.component.html | 34 +- .../pool-details/pool-details.component.html | 36 +- .../pool/pool-form/pool-form.component.html | 166 +- .../pool/pool-list/pool-list.component.html | 23 +- .../ceph/rgw/models/rgw-bucket-encryption.ts | 7 + .../rgw-bucket-details.component.html | 5 + .../rgw-bucket-form.component.html | 111 +- .../rgw-bucket-form.component.ts | 94 +- .../rgw-config-modal.component.html | 237 + .../rgw-config-modal.component.scss | 0 .../rgw-config-modal.component.spec.ts | 38 + .../rgw-config-modal.component.ts | 136 + .../rgw-daemon-details.component.html | 28 +- .../rgw-daemon-list.component.html | 32 +- .../rgw-daemon-list.component.spec.ts | 2 +- .../rgw-user-capability-modal.component.html | 4 +- .../rgw-user-details.component.html | 20 +- .../rgw-user-form.component.html | 186 +- .../rgw-user-s3-key-modal.component.html | 28 +- .../rgw-user-subuser-modal.component.html | 16 +- .../rgw-user-swift-key-modal.component.html | 14 +- .../frontend/src/app/ceph/rgw/rgw.module.ts | 4 +- .../device-list/device-list.component.html | 31 +- .../device-list/device-list.component.ts | 13 +- .../shared/feedback/feedback.component.html | 2 +- .../smart-list/smart-list.component.html | 22 +- .../login-password-form.component.html | 26 +- .../app/core/auth/login/login.component.html | 16 +- .../app/core/auth/login/login.component.scss | 2 +- .../auth/user-form/user-form.component.html | 32 +- .../user-password-form.component.html | 26 +- .../auth/user-tabs/user-tabs.component.html | 22 +- .../app/core/context/context.component.html | 4 +- .../login-layout/login-layout.component.scss | 2 +- .../navigation/about/about.component.html | 3 +- .../administration.component.html | 3 +- .../dashboard-help.component.html | 13 +- .../identity/identity.component.html | 5 +- 
.../navigation/navigation.component.html | 26 +- .../navigation/navigation.component.scss | 7 +- .../navigation/navigation.component.spec.ts | 58 +- .../navigation/navigation.component.ts | 2 +- .../notifications.component.scss | 4 +- .../app/shared/api/ceph-service.service.ts | 25 +- .../src/app/shared/api/host.service.ts | 2 +- .../src/app/shared/api/osd.service.ts | 3 + .../src/app/shared/api/paginate.model.ts | 16 + .../app/shared/api/rbd-mirroring.service.ts | 4 + .../app/shared/api/rgw-bucket.service.spec.ts | 32 +- .../src/app/shared/api/rgw-bucket.service.ts | 83 +- .../cd-label/cd-label.component.html | 11 + .../cd-label/cd-label.component.scss | 0 .../cd-label/cd-label.component.spec.ts | 25 + .../components/cd-label/cd-label.component.ts | 11 + .../cd-label/color-class-from-text.pipe.ts | 28 + .../shared/components/components.module.ts | 9 +- .../config-option.component.html | 22 +- .../form-modal/form-modal.component.html | 2 +- .../components/grafana/grafana.component.html | 27 +- .../grafana/grafana.component.spec.ts | 1 + .../components/grafana/grafana.component.ts | 2 + .../components/helper/helper.component.html | 2 +- .../components/helper/helper.component.ts | 3 + .../language-selector.component.html | 9 +- .../loading-panel.component.html | 2 +- .../components/modal/modal.component.html | 5 +- .../notifications-sidebar.component.html | 29 +- .../notifications-sidebar.component.scss | 4 - .../notifications-sidebar.component.spec.ts | 40 +- .../notifications-sidebar.component.ts | 61 + .../refresh-selector.component.html | 12 +- .../select-badges.component.html | 6 +- .../components/select/select.component.html | 8 +- .../telemetry-notification.component.scss | 6 + .../usage-bar/usage-bar.component.html | 4 + .../usage-bar/usage-bar.component.ts | 2 + .../app/shared/datatable/datatable.module.ts | 16 +- .../table-actions.component.html | 6 +- .../table-actions.component.scss | 11 + .../table-key-value.component.spec.ts | 3 +- .../table-pagination.component.html | 58 + .../table-pagination.component.scss | 21 + .../table-pagination.component.spec.ts | 54 + .../table-pagination.component.ts | 110 + .../datatable/table/table.component.html | 68 +- .../datatable/table/table.component.scss | 6 +- .../datatable/table/table.component.spec.ts | 25 +- .../shared/datatable/table/table.component.ts | 13 +- .../directives/auth-storage.directive.ts | 2 +- .../shared/directives/autofocus.directive.ts | 2 +- .../directives/form-loading.directive.spec.ts | 2 +- .../directives/form-loading.directive.ts | 25 +- .../cd-form-control.directive.ts | 2 +- .../cd-form-group.directive.ts | 2 +- .../cd-form-validation.directive.ts | 2 +- .../src/app/shared/enum/health-label.enum.ts | 5 + .../src/app/shared/enum/icons.enum.ts | 1 + .../app/shared/models/alertmanager-silence.ts | 3 + .../src/app/shared/models/cd-notification.ts | 2 + .../models/cd-table-fetch-data-context.ts | 7 + .../frontend/src/app/shared/models/devices.ts | 1 + .../shared/pipes/health-label.pipe.spec.ts | 24 + .../src/app/shared/pipes/health-label.pipe.ts | 12 + .../src/app/shared/pipes/pipes.module.ts | 3 + .../services/notification.service.spec.ts | 6 +- .../shared/services/notification.service.ts | 2 +- .../prometheus-silence-matcher.service.ts | 5 +- .../pybind/mgr/dashboard/frontend/src/main.ts | 2 +- .../mgr/dashboard/frontend/src/polyfills.ts | 9 - .../mgr/dashboard/frontend/src/setupJest.ts | 2 +- .../mgr/dashboard/frontend/src/styles.scss | 81 + .../src/styles/bootstrap-extends.scss | 16 +- 
.../src/styles/ceph-custom/_basics.scss | 35 +- .../src/styles/ceph-custom/_buttons.scss | 34 +- .../src/styles/ceph-custom/_forms.scss | 21 +- .../styles/defaults/_bootstrap-defaults.scss | 43 +- .../frontend/src/testing/unit-test-helper.ts | 10 +- .../mgr/dashboard/frontend/tsconfig.json | 1 + .../pybind/mgr/dashboard/frontend/tslint.json | 118 - ceph/src/pybind/mgr/dashboard/module.py | 78 +- ceph/src/pybind/mgr/dashboard/openapi.yaml | 226 +- .../mgr/dashboard/services/_paginate.py | 71 + .../mgr/dashboard/services/ceph_service.py | 113 +- .../mgr/dashboard/services/orchestrator.py | 19 +- ceph/src/pybind/mgr/dashboard/services/rbd.py | 62 +- .../mgr/dashboard/services/rgw_client.py | 47 +- .../mgr/dashboard/services/tcmu_service.py | 1 + .../pybind/mgr/dashboard/tests/__init__.py | 20 +- .../pybind/mgr/dashboard/tests/test_host.py | 110 +- .../mgr/dashboard/tests/test_rbd_mirroring.py | 23 +- ceph/src/pybind/mgr/dashboard/tox.ini | 8 +- ceph/src/pybind/mgr/mgr_module.py | 12 +- ceph/src/pybind/mgr/mgr_util.py | 16 +- ceph/src/pybind/mgr/nfs/cluster.py | 2 + ceph/src/pybind/mgr/nfs/export.py | 8 +- ceph/src/pybind/mgr/nfs/ganesha_conf.py | 37 +- ceph/src/pybind/mgr/nfs/module.py | 33 +- ceph/src/pybind/mgr/nfs/tests/test_nfs.py | 114 +- ceph/src/pybind/mgr/object_format.py | 97 +- .../src/pybind/mgr/orchestrator/_interface.py | 9 + ceph/src/pybind/mgr/orchestrator/module.py | 17 +- ceph/src/pybind/mgr/pg_autoscaler/module.py | 3 +- .../tests/test_cal_final_pg_target.py | 2 +- .../tests/test_overlapping_roots.py | 2 +- ceph/src/pybind/mgr/progress/test_progress.py | 4 +- ceph/src/pybind/mgr/prometheus/module.py | 71 +- .../rbd_support/mirror_snapshot_schedule.py | 6 +- ceph/src/pybind/mgr/rbd_support/schedule.py | 23 +- ceph/src/pybind/mgr/rbd_support/task.py | 21 +- .../mgr/rbd_support/trash_purge_schedule.py | 6 +- ceph/src/pybind/mgr/requirements-required.txt | 5 +- ceph/src/pybind/mgr/rook/rook_cluster.py | 59 +- .../mgr/snap_schedule/fs/schedule_client.py | 1 + ceph/src/pybind/mgr/snap_schedule/module.py | 57 +- ceph/src/pybind/mgr/stats/fs/perf_stats.py | 179 +- ceph/src/pybind/mgr/telemetry/module.py | 19 +- .../mgr/test_orchestrator/dummy_data.json | 20 +- .../pybind/mgr/tests/test_object_format.py | 166 +- ceph/src/pybind/mgr/tox.ini | 143 +- ceph/src/pybind/mgr/volumes/fs/volume.py | 21 +- ceph/src/pybind/mgr/volumes/module.py | 6 +- .../ceph/deployment/drive_group.py | 24 +- .../deployment/drive_selection/matchers.py | 6 +- .../deployment/drive_selection/selector.py | 11 + .../python-common/ceph/deployment/hostspec.py | 8 +- .../ceph/deployment/inventory.py | 16 +- .../ceph/deployment/service_spec.py | 122 +- .../ceph/deployment/translate.py | 196 +- .../ceph/tests/test_disk_selector.py | 6 + .../ceph/tests/test_drive_group.py | 250 +- .../ceph/tests/test_inventory.py | 52 +- .../ceph/tests/test_service_spec.py | 4 +- ceph/src/rgw/CMakeLists.txt | 10 +- ceph/src/rgw/cls_fifo_legacy.cc | 420 +- ceph/src/rgw/cls_fifo_legacy.h | 21 +- ceph/src/rgw/librgw.cc | 2 +- ceph/src/rgw/rgw_admin.cc | 60 +- ceph/src/rgw/rgw_asio_client.cc | 1 + ceph/src/rgw/rgw_asio_client.h | 3 + ceph/src/rgw/rgw_asio_frontend.cc | 26 +- ceph/src/rgw/rgw_asio_frontend_timer.h | 3 +- ceph/src/rgw/rgw_auth_s3.cc | 16 +- ceph/src/rgw/rgw_auth_s3.h | 2 +- ceph/src/rgw/rgw_bucket.cc | 173 +- ceph/src/rgw/rgw_bucket.h | 21 +- ceph/src/rgw/rgw_common.cc | 18 + ceph/src/rgw/rgw_common.h | 1 + ceph/src/rgw/rgw_coroutine.cc | 5 +- ceph/src/rgw/rgw_coroutine.h | 1 + ceph/src/rgw/rgw_gc.cc | 52 +- 
ceph/src/rgw/rgw_gc.h | 3 +- ceph/src/rgw/rgw_lc.cc | 11 +- ceph/src/rgw/rgw_lc_tier.cc | 28 +- ceph/src/rgw/rgw_lc_tier.h | 2 +- ceph/src/rgw/rgw_log.cc | 29 +- ceph/src/rgw/rgw_log.h | 73 +- ceph/src/rgw/rgw_lua_request.cc | 10 +- ceph/src/rgw/rgw_lua_request.h | 4 +- ceph/src/rgw/rgw_metadata.h | 2 + ceph/src/rgw/rgw_op.cc | 81 +- ceph/src/rgw/rgw_op.h | 5 + ceph/src/rgw/rgw_process.cc | 6 +- ceph/src/rgw/rgw_putobj_processor.cc | 13 +- ceph/src/rgw/rgw_rados.cc | 499 +- ceph/src/rgw/rgw_rados.h | 6 +- ceph/src/rgw/rgw_reshard.cc | 10 +- ceph/src/rgw/rgw_rest_client.cc | 58 +- ceph/src/rgw/rgw_rest_client.h | 2 +- ceph/src/rgw/rgw_rest_conn.cc | 25 +- ceph/src/rgw/rgw_rest_conn.h | 3 + ceph/src/rgw/rgw_rest_iam.cc | 18 +- ceph/src/rgw/rgw_rest_iam.h | 6 +- ceph/src/rgw/rgw_rest_role.cc | 278 +- ceph/src/rgw/rgw_rest_role.h | 21 +- ceph/src/rgw/rgw_rest_s3.cc | 58 +- ceph/src/rgw/rgw_role.cc | 370 +- ceph/src/rgw/rgw_role.h | 183 +- ceph/src/rgw/rgw_sal.h | 13 +- ceph/src/rgw/rgw_sal_dbstore.cc | 25 +- ceph/src/rgw/rgw_sal_dbstore.h | 8 +- ceph/src/rgw/rgw_sal_rados.cc | 266 +- ceph/src/rgw/rgw_sal_rados.h | 15 +- ceph/src/rgw/rgw_service.cc | 25 +- ceph/src/rgw/rgw_service.h | 9 +- ceph/src/rgw/rgw_sync.cc | 1 + ceph/src/rgw/rgw_sync_trace.cc | 1 + ceph/src/rgw/rgw_sync_trace.h | 1 + ceph/src/rgw/rgw_tools.cc | 15 + ceph/src/rgw/rgw_tools.h | 5 + ceph/src/rgw/rgw_user.cc | 3 +- ceph/src/rgw/rgw_zone.cc | 2 + ceph/src/rgw/rgw_zone.h | 11 +- ceph/src/rgw/services/svc_meta_be.cc | 5 +- ceph/src/rgw/services/svc_meta_be.h | 6 +- ceph/src/rgw/services/svc_meta_be_otp.cc | 3 +- ceph/src/rgw/services/svc_meta_be_otp.h | 3 +- ceph/src/rgw/services/svc_meta_be_sobj.cc | 10 +- ceph/src/rgw/services/svc_meta_be_sobj.h | 3 +- ceph/src/rgw/services/svc_role_rados.cc | 82 + ceph/src/rgw/services/svc_role_rados.h | 50 + ceph/src/rgw/services/svc_sys_obj_cache.cc | 2 + ceph/src/rgw/services/svc_user_rados.cc | 1 - ceph/src/script/cpatch | 2 +- ceph/src/test/CMakeLists.txt | 1 - ceph/src/test/admin_socket.cc | 3 + ceph/src/test/cli/radosgw-admin/help.t | 3 + ceph/src/test/cli/rbd/help.t | 27 +- ceph/src/test/client/CMakeLists.txt | 1 + ceph/src/test/client/TestClient.h | 71 +- ceph/src/test/client/ops.cc | 45 + ceph/src/test/cls_fifo/CMakeLists.txt | 34 - ceph/src/test/cls_fifo/bench_cls_fifo.cc | 464 - ceph/src/test/cls_fifo/test_cls_fifo.cc | 741 - ceph/src/test/cls_rbd/test_cls_rbd.cc | 44 +- ceph/src/test/cls_rgw/test_cls_rgw.cc | 88 + ceph/src/test/libcephfs/CMakeLists.txt | 14 + ceph/src/test/libcephfs/acl.cc | 54 + ceph/src/test/libcephfs/newops.cc | 86 + ceph/src/test/libcephfs/test.cc | 186 +- .../test/librbd/io/test_mock_ImageRequest.cc | 173 + ceph/src/test/librbd/test_librbd.cc | 2398 +- ceph/src/test/librbd/test_mock_Journal.cc | 58 + ceph/src/test/mon/test_election.cc | 14 +- ceph/src/test/objectstore/store_test.cc | 291 +- ceph/src/test/objectstore/test_bluefs.cc | 224 +- .../test/objectstore/test_bluestore_types.cc | 26 +- ceph/src/test/osd/TestMClockScheduler.cc | 8 +- ceph/src/test/pybind/test_rados.py | 6 +- ceph/src/test/rgw/rgw_multi/conn.py | 13 +- ceph/src/test/rgw/rgw_multi/multisite.py | 11 +- ceph/src/test/rgw/rgw_multi/tests.py | 39 + ceph/src/test/rgw/rgw_multi/tests_ps.py | 2593 -- ceph/src/test/rgw/rgw_multi/zone_cloud.py | 3 + ceph/src/test/rgw/rgw_multi/zone_es.py | 3 + ceph/src/test/rgw/rgw_multi/zone_ps.py | 390 - ceph/src/test/rgw/rgw_multi/zone_rados.py | 25 + ceph/src/test/rgw/test_cls_fifo_legacy.cc | 1 - ceph/src/test/rgw/test_multi.md | 3 - 
ceph/src/test/rgw/test_multi.py | 28 +- ceph/src/test/rgw/test_rgw_lua.cc | 48 +- ceph/src/tools/ceph-dencoder/CMakeLists.txt | 4 +- ceph/src/tools/cephfs/DataScan.cc | 27 + ceph/src/tools/cephfs/top/cephfs-top | 860 +- ceph/src/tools/cephfs_mirror/FSMirror.cc | 1 + ceph/src/tools/cephfs_mirror/PeerReplayer.cc | 1 + ceph/src/tools/crushtool.cc | 2 +- ceph/src/tools/rbd/Utils.cc | 51 + ceph/src/tools/rbd/Utils.h | 6 + ceph/src/tools/rbd/action/Device.cc | 10 +- ceph/src/tools/rbd/action/Ggate.cc | 69 +- ceph/src/tools/rbd/action/Nbd.cc | 139 +- ceph/src/tools/rbd/action/Wnbd.cc | 75 +- ceph/src/tools/rbd_mirror/ImageDeleter.cc | 1 + ceph/src/tools/rbd_mirror/ImageReplayer.cc | 1 + ceph/src/tools/rbd_mirror/Mirror.cc | 1 + ceph/src/tools/rbd_mirror/PoolReplayer.cc | 1 + .../image_replayer/snapshot/Replayer.cc | 27 +- .../image_replayer/snapshot/Replayer.h | 3 + ceph/src/tools/rbd_nbd/rbd-nbd.cc | 50 +- ceph/src/tools/rbd_wnbd/wnbd_handler.cc | 19 +- ceph/src/tools/rbd_wnbd/wnbd_handler.h | 3 +- 1051 files changed, 52573 insertions(+), 76572 deletions(-) delete mode 100755 ceph/admin/rtd-checkout-main delete mode 100644 ceph/doc/dev/deduplication.rst delete mode 100644 ceph/doc/releases/argonaut.rst delete mode 100644 ceph/doc/releases/bobtail.rst delete mode 100644 ceph/doc/releases/cuttlefish.rst delete mode 100644 ceph/doc/releases/dumpling.rst delete mode 100644 ceph/doc/releases/emperor.rst delete mode 100644 ceph/doc/releases/firefly.rst delete mode 100644 ceph/doc/releases/general.rst delete mode 100644 ceph/doc/releases/giant.rst delete mode 100644 ceph/doc/releases/hammer.rst delete mode 100644 ceph/doc/releases/index.rst delete mode 100644 ceph/doc/releases/infernalis.rst delete mode 100644 ceph/doc/releases/jewel.rst delete mode 100644 ceph/doc/releases/kraken.rst delete mode 100644 ceph/doc/releases/luminous.rst delete mode 100644 ceph/doc/releases/mimic.rst delete mode 100644 ceph/doc/releases/nautilus.rst delete mode 100644 ceph/doc/releases/octopus.rst delete mode 100644 ceph/doc/releases/pacific.rst delete mode 100644 ceph/doc/releases/quincy.rst create mode 100644 ceph/doc/security/CVE-2022-0670.rst rename ceph/qa/cephfs/overrides/{whitelist_health.yaml => ignorelist_health.yaml} (100%) rename ceph/qa/cephfs/overrides/{whitelist_wrongly_marked_down.yaml => ignorelist_wrongly_marked_down.yaml} (100%) create mode 100644 ceph/qa/distros/all/ubuntu_22.04.yaml rename ceph/qa/overrides/{whitelist_wrongly_marked_down.yaml => ignorelist_wrongly_marked_down.yaml} (100%) rename ceph/qa/standalone/misc/{test-mclock-profile-switch.sh => mclock-config.sh} (59%) mode change 100644 => 100755 create mode 120000 ceph/qa/suites/fs/32bits/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/32bits/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/32bits/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/32bits/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/bugs/client_trim_caps/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/bugs/client_trim_caps/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/bugs/client_trim_caps/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/bugs/client_trim_caps/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/cephadm/renamevolume/overrides/ignorelist_health.yaml delete mode 120000 ceph/qa/suites/fs/cephadm/renamevolume/overrides/whitelist_health.yaml create mode 
120000 ceph/qa/suites/fs/full/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/full/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/full/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/full/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/functional/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/functional/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/functional/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/functional/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/libcephfs/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/libcephfs/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/libcephfs/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/libcephfs/overrides/whitelist_wrongly_marked_down.yaml create mode 100644 ceph/qa/suites/fs/mixed-clients/distro/$ create mode 120000 ceph/qa/suites/fs/mixed-clients/distro/.qa create mode 120000 ceph/qa/suites/fs/mixed-clients/distro/centos_8.yaml create mode 120000 ceph/qa/suites/fs/mixed-clients/distro/rhel_8.yaml create mode 100644 ceph/qa/suites/fs/mixed-clients/distro/ubuntu/+ create mode 120000 ceph/qa/suites/fs/mixed-clients/distro/ubuntu/.qa create mode 120000 ceph/qa/suites/fs/mixed-clients/distro/ubuntu/latest.yaml create mode 100644 ceph/qa/suites/fs/mixed-clients/distro/ubuntu/overrides.yaml create mode 120000 ceph/qa/suites/fs/mixed-clients/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/mixed-clients/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/mixed-clients/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/mixed-clients/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/multiclient/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/multiclient/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/multiclient/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/multiclient/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/multifs/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/multifs/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/multifs/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/multifs/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/permission/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/permission/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/permission/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/permission/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/shell/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/shell/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/shell/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/shell/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/snaps/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/snaps/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/snaps/overrides/whitelist_health.yaml delete mode 120000 
ceph/qa/suites/fs/snaps/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/thrash/multifs/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/thrash/multifs/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/thrash/multifs/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/thrash/multifs/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/thrash/workloads/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/thrash/workloads/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/thrash/workloads/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/thrash/workloads/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/top/overrides/ignorelist_health.yaml delete mode 120000 ceph/qa/suites/fs/top/overrides/whitelist_health.yaml create mode 120000 ceph/qa/suites/fs/traceless/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/traceless/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/traceless/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/traceless/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/old_client/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/old_client/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/old_client/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/old_client/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/featureful_client/upgraded_client/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/mds_upgrade_sequence/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/upgrade/nofs/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/upgrade/nofs/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/nofs/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/upgrade/nofs/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/.qa create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/% create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/.qa create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/bluestore-bitmap.yaml create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/centos_latest.yaml create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/clusters/% create mode 120000 
ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/clusters/.qa create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/clusters/1-mds-1-client-micro.yaml create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/conf create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/% create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/.qa create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/ignorelist_wrongly_marked_down.yaml create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/overrides/pg-warn.yaml create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/tasks/% create mode 120000 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/tasks/.qa create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/tasks/0-nautilus.yaml create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/tasks/1-client-upgrade.yaml create mode 100644 ceph/qa/suites/fs/upgrade/upgraded_client/from_nautilus/tasks/2-client-sanity.yaml create mode 120000 ceph/qa/suites/fs/verify/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/verify/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/verify/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/verify/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/volumes/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/volumes/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/volumes/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/volumes/overrides/whitelist_wrongly_marked_down.yaml create mode 120000 ceph/qa/suites/fs/workload/overrides/ignorelist_health.yaml create mode 120000 ceph/qa/suites/fs/workload/overrides/ignorelist_wrongly_marked_down.yaml delete mode 120000 ceph/qa/suites/fs/workload/overrides/whitelist_health.yaml delete mode 120000 ceph/qa/suites/fs/workload/overrides/whitelist_wrongly_marked_down.yaml create mode 100644 ceph/qa/suites/orch/cephadm/with-work/tasks/rotate-keys.yaml create mode 100644 ceph/qa/suites/orch/cephadm/workunits/task/test_iscsi_pids_limit.yaml create mode 100644 ceph/qa/suites/orch/cephadm/workunits/task/test_orch_cli_mon.yaml rename ceph/qa/suites/powercycle/osd/{whitelist_health.yaml => ignorelist_health.yaml} (100%) create mode 100644 ceph/qa/suites/rgw/cloud-transition/+ create mode 120000 ceph/qa/suites/rgw/cloud-transition/.qa create mode 100644 ceph/qa/suites/rgw/cloud-transition/cluster.yaml create mode 100644 ceph/qa/suites/rgw/cloud-transition/overrides.yaml create mode 120000 ceph/qa/suites/rgw/cloud-transition/supported-random-distro$ create mode 100644 ceph/qa/suites/rgw/cloud-transition/tasks/cloud_transition_s3tests.yaml rename ceph/qa/suites/rgw/multisite/realms/{three-zone-plus-pubsub.yaml => three-zones.yaml} (80%) create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/.qa create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/% create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/.qa create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/0-cluster/+ create mode 120000 
ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/0-cluster/.qa create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/0-cluster/openstack.yaml create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/0-cluster/start.yaml create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/1-install/.qa create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/1-install/quincy-client-x.yaml create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/2-workload/.qa create mode 100644 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/2-workload/rbd_notification_tests.yaml create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/supported/.qa create mode 120000 ceph/qa/suites/upgrade-clients/client-upgrade-quincy-reef/quincy-client-x/rbd/supported/ubuntu_20.04.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-parallel/% create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-parallel/point-to-point-upgrade.yaml create mode 120000 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-parallel/supported-all-distro/centos_8.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-parallel/supported-all-distro/ubuntu_latest.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/% create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/0-cluster/+ create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/0-cluster/openstack.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/0-cluster/start.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/1-ceph-install/quincy.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/1.1.short_pg_log.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/2-partial-upgrade/firsthalf.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/3-thrash/default.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/+ create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/fsx.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/radosbench.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/rbd-cls.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/rbd-import-export.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/rbd_api.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/readwrite.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/4-workload/snaps-few-objects.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/5-finish-upgrade.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/6-final-workload/+ create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/6-final-workload/rbd-python.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/6-final-workload/snaps-many-objects.yaml create mode 100644 
ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/objectstore/bluestore-bitmap.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/objectstore/bluestore-comp.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/objectstore/bluestore-stupid.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/objectstore/filestore-xfs.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/supported-all-distro/ubuntu_latest.yaml create mode 100644 ceph/qa/suites/upgrade/quincy-p2p/quincy-p2p-stress-split/thrashosds-health.yaml create mode 100644 ceph/qa/tasks/cephadm_cases/test_cli_mon.py create mode 100644 ceph/qa/tasks/cephfs/test_newops.py create mode 100644 ceph/qa/tasks/rgw_cloudtier.py create mode 100755 ceph/qa/workunits/cephadm/test_iscsi_pids_limit.sh create mode 100755 ceph/qa/workunits/mon/auth_key_rotation.sh create mode 100644 ceph/qa/workunits/windows/run-tests.ps1 create mode 100644 ceph/qa/workunits/windows/test_rbd_wnbd.py create mode 100644 ceph/src/messages/MMonUsedPendingKeys.h delete mode 100644 ceph/src/neorados/cls/fifo.cc delete mode 100644 ceph/src/neorados/cls/fifo.h create mode 100644 ceph/src/pybind/mgr/dashboard/controllers/_paginate.py create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/.eslintrc.json create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/cypress/integration/a11y/dashboard.e2e-spec.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/cypress/integration/a11y/navigation.e2e-spec.ts delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/281.7c1918629ff8b413cc76.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/43.819b1fed46aadf1b.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/437.7720eaff4a1def1b.js delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/483.43ef92bcd845cb24eae3.js delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/585.7d0bcf3a0ac0c40fef3b.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/95.1ae8f43a396d3fea.js rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{Ceph_Logo.487a0001b327fa7f5232.svg => Ceph_Logo.beb815b55d2e7363.svg} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{ceph_background.e82dd79127290ddbe8cb.gif => ceph_background.3fbdf95cd52530d7.gif} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{forkawesome-webfont.2dfb5f36fc148e26e398.woff => forkawesome-webfont.23671bdbd055fa7b.woff} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{forkawesome-webfont.86541105409e56d17291.svg => forkawesome-webfont.3217b1b06e001045.svg} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{forkawesome-webfont.ee4d8bfd0af89fc714a2.ttf => forkawesome-webfont.3b3951dce6cf5d60.ttf} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{forkawesome-webfont.e182ad6df04f9177b326.eot => forkawesome-webfont.c0fee260bb6fd5fd.eot} (100%) rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{forkawesome-webfont.7c20758e3e7c7dff7c8d.woff2 => forkawesome-webfont.d0a4ad9e6369d510.woff2} (100%) delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/main.86799889c70942fa9a19.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/main.ddd4de0999172734.js delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/polyfills.2068f3f22a496426465b.js create mode 100644 
ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/polyfills.4b60b22744014b0b.js rename ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/{prometheus_logo.8b3183e5a2db0e87bb2b.svg => prometheus_logo.8057911d27be9bb1.svg} (100%) create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/runtime.4fd39655e7ea619b.js delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/runtime.ab6c27cac6d7501e18e8.js delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/scripts.6bda3fa7e09a87cd4228.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/scripts.cfd741a72b67f696.js create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/styles.8b6796664b673424.css delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/dist/en-US/styles.ffb7f665775e3c191fa3.css create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/jest.config.cjs create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/models/rgw-bucket-encryption.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-config-modal/rgw-config-modal.component.html create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-config-modal/rgw-config-modal.component.scss create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-config-modal/rgw-config-modal.component.spec.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-config-modal/rgw-config-modal.component.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/api/paginate.model.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.html create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.scss create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.spec.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/cd-label.component.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/components/cd-label/color-class-from-text.pipe.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table-pagination/table-pagination.component.html create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table-pagination/table-pagination.component.scss create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table-pagination/table-pagination.component.spec.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/datatable/table-pagination/table-pagination.component.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/enum/health-label.enum.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/pipes/health-label.pipe.spec.ts create mode 100644 ceph/src/pybind/mgr/dashboard/frontend/src/app/shared/pipes/health-label.pipe.ts delete mode 100644 ceph/src/pybind/mgr/dashboard/frontend/tslint.json create mode 100644 ceph/src/pybind/mgr/dashboard/services/_paginate.py create mode 100644 ceph/src/rgw/services/svc_role_rados.cc create mode 100644 ceph/src/rgw/services/svc_role_rados.h create mode 100644 ceph/src/test/client/ops.cc delete mode 100644 ceph/src/test/cls_fifo/CMakeLists.txt delete mode 100644 ceph/src/test/cls_fifo/bench_cls_fifo.cc delete mode 100644 ceph/src/test/cls_fifo/test_cls_fifo.cc create mode 100644 ceph/src/test/libcephfs/newops.cc 
delete mode 100644 ceph/src/test/rgw/rgw_multi/tests_ps.py delete mode 100644 ceph/src/test/rgw/rgw_multi/zone_ps.py diff --git a/ceph/.github/CODEOWNERS b/ceph/.github/CODEOWNERS index f80f4b748..cda890f59 100644 --- a/ceph/.github/CODEOWNERS +++ b/ceph/.github/CODEOWNERS @@ -110,6 +110,7 @@ README* @ceph/doc-writers /qa/workunits/cls/test_cls_lock.sh @ceph/rbd /qa/workunits/cls/test_cls_rbd.sh @ceph/rbd /qa/workunits/rbd @ceph/rbd +/qa/workunits/windows @ceph/rbd /src/ceph-rbdnamer @ceph/rbd /src/cls/journal @ceph/rbd /src/cls/lock @ceph/rbd diff --git a/ceph/.readthedocs.yml b/ceph/.readthedocs.yml index 361c664fa..f51969084 100644 --- a/ceph/.readthedocs.yml +++ b/ceph/.readthedocs.yml @@ -1,10 +1,6 @@ --- # Read the Docs configuration file # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details -# -# The pre_build command checks if we're building a named branch (i.e., not a PR). -# If so, check out doc/releases from the main branch before building so -# it's always up to date on docs.ceph.com/en/*. version: 2 formats: [] @@ -14,9 +10,7 @@ build: python: "3.8" apt_packages: - ditaa - jobs: - pre_build: - - bash admin/rtd-checkout-main + - graphviz python: install: - requirements: admin/doc-requirements.txt diff --git a/ceph/CMakeLists.txt b/ceph/CMakeLists.txt index 7ffe1065d..b52e8f82f 100644 --- a/ceph/CMakeLists.txt +++ b/ceph/CMakeLists.txt @@ -1,7 +1,7 @@ cmake_minimum_required(VERSION 3.16) project(ceph - VERSION 17.2.5 + VERSION 17.2.6 LANGUAGES CXX C ASM) cmake_policy(SET CMP0028 NEW) @@ -344,9 +344,11 @@ and then jemalloc. If neither of then is found. use the one in libc.") if(ALLOCATOR) if(${ALLOCATOR} MATCHES "tcmalloc(_minimal)?") find_package(gperftools 2.6.2 REQUIRED) + set(ALLOC_LIBS gperftools::${ALLOCATOR}) set(HAVE_LIBTCMALLOC ON) elseif(${ALLOCATOR} STREQUAL "jemalloc") find_package(JeMalloc REQUIRED) + set(ALLOC_LIBS JeMalloc::JeMalloc) set(HAVE_JEMALLOC 1) elseif(NOT ALLOCATOR STREQUAL "libc") message(FATAL_ERROR "Unsupported allocator selected: ${ALLOCATOR}") @@ -359,8 +361,10 @@ else(ALLOCATOR) endif() if(gperftools_FOUND) set(ALLOCATOR tcmalloc) + set(ALLOC_LIBS gperftools::tcmalloc) elseif(JeMalloc_FOUND) set(ALLOCATOR jemalloc) + set(ALLOC_LIBS JeMalloc::JeMalloc) else() if(NOT FREEBSD) # FreeBSD already has jemalloc as its default allocator @@ -369,6 +373,13 @@ else(ALLOCATOR) set(ALLOCATOR "libc") endif(gperftools_FOUND) endif(ALLOCATOR) +if(NOT ALLOCATOR STREQUAL "libc") + add_compile_options( + $<$:-fno-builtin-malloc> + $<$:-fno-builtin-calloc> + $<$:-fno-builtin-realloc> + $<$:-fno-builtin-free>) +endif() # Mingw generates incorrect entry points when using "-pie". if(WIN32 OR (HAVE_LIBTCMALLOC AND WITH_STATIC_LIBSTDCXX)) diff --git a/ceph/CodingStyle b/ceph/CodingStyle index 8b819bf89..659298f0e 100644 --- a/ceph/CodingStyle +++ b/ceph/CodingStyle @@ -156,9 +156,9 @@ For Angular code, we follow the official Angular style guide: https://angular.io/guide/styleguide To check whether your code is conformant with the style guide, we use a -combination of TSLint, Codelyzer and Prettier: +combination of ESLint, Codelyzer and Prettier: - https://palantir.github.io/tslint/ + https://eslint.org/ http://codelyzer.com/ https://prettier.io/ diff --git a/ceph/PendingReleaseNotes b/ceph/PendingReleaseNotes index 873c3e7ca..be4de4ead 100644 --- a/ceph/PendingReleaseNotes +++ b/ceph/PendingReleaseNotes @@ -1,3 +1,34 @@ +>=17.2.6 +-------- + +* `ceph mgr dump` command now outputs `last_failure_osd_epoch` and + `active_clients` fields at the top level. 
Previously, these fields were + output under `always_on_modules` field. + +>=17.2.5 +-------- + +* RBD: The semantics of compare-and-write C++ API (`Image::compare_and_write` + and `Image::aio_compare_and_write` methods) now match those of C API. Both + compare and write steps operate only on `len` bytes even if the respective + buffers are larger. The previous behavior of comparing up to the size of + the compare buffer was prone to subtle breakage upon straddling a stripe + unit boundary. +* RBD: compare-and-write operation is no longer limited to 512-byte sectors. + Assuming proper alignment, it now allows operating on stripe units (4M by + default). +* RBD: New `rbd_aio_compare_and_writev` API method to support scatter/gather + on both compare and write buffers. This compliments existing `rbd_aio_readv` + and `rbd_aio_writev` methods. +* RBD: `rbd device unmap` command gained `--namespace` option. Support for + namespaces was added to RBD in Nautilus 14.2.0 and it has been possible to + map and unmap images in namespaces using the `image-spec` syntax since then + but the corresponding option available in most other commands was missing. +* CEPHFS: Rename the `mds_max_retries_on_remount_failure` option to + `client_max_retries_on_remount_failure` and move it from mds.yaml.in to + mds-client.yaml.in because this option was only used by MDS client from its + birth. + >=17.2.4 -------- @@ -8,6 +39,15 @@ * OSD: The issue of high CPU utilization during recovery/backfill operations has been fixed. For more details, see: https://tracker.ceph.com/issues/56530. +* Trimming of PGLog dups is now controlled by the size instead of the version. + This fixes the PGLog inflation issue that was happening when the on-line + (in OSD) trimming got jammed after a PG split operation. Also, a new off-line + mechanism has been added: `ceph-objectstore-tool` got `trim-pg-log-dups` op + that targets situations where OSD is unable to boot due to those inflated dups. + If that is the case, in OSD logs the "You can be hit by THE DUPS BUG" warning + will be visible. + Relevant tracker: https://tracker.ceph.com/issues/53729 + >=17.2.1 * The "BlueStore zero block detection" feature (first introduced to Quincy in diff --git a/ceph/admin/rtd-checkout-main b/ceph/admin/rtd-checkout-main deleted file mode 100755 index 829d7c384..000000000 --- a/ceph/admin/rtd-checkout-main +++ /dev/null @@ -1,10 +0,0 @@ -# See .readthedocs.yml -set -ex -re='^[0-9]+$' -if [[ $READTHEDOCS_VERSION =~ $re ]]; then - echo "Building docs for PR $READTHEDOCS_VERSION. Will not check out doc/releases from main branch." -else - echo "Building docs for $READTHEDOCS_VERSION branch. Will check out doc/releases from main branch." 
- git checkout origin/main -- doc/releases -fi -git status diff --git a/ceph/ceph.spec b/ceph/ceph.spec index 0e19cd654..afe6fe92f 100644 --- a/ceph/ceph.spec +++ b/ceph/ceph.spec @@ -61,7 +61,11 @@ %global _remote_tarball_prefix https://download.ceph.com/tarballs/ %endif %if 0%{?suse_version} +%ifarch s390x +%bcond_with system_pmdk +%else %bcond_without system_pmdk +%endif %bcond_with amqp_endpoint %bcond_with cephfs_java %bcond_with kafka_endpoint @@ -162,7 +166,7 @@ # main package definition ################################################################################# Name: ceph -Version: 17.2.5 +Version: 17.2.6 Release: 0%{?dist} %if 0%{?fedora} || 0%{?rhel} Epoch: 2 @@ -178,7 +182,7 @@ License: LGPL-2.1 and LGPL-3.0 and CC-BY-SA-3.0 and GPL-2.0 and BSL-1.0 and BSD- Group: System/Filesystems %endif URL: http://ceph.com/ -Source0: %{?_remote_tarball_prefix}ceph-17.2.5.tar.bz2 +Source0: %{?_remote_tarball_prefix}ceph-17.2.6.tar.bz2 %if 0%{?suse_version} # _insert_obs_source_lines_here ExclusiveArch: x86_64 aarch64 ppc64le s390x @@ -202,9 +206,12 @@ BuildRequires: selinux-policy-devel BuildRequires: gperf BuildRequires: cmake > 3.5 BuildRequires: fuse-devel -%if 0%{?fedora} || 0%{?suse_version} || 0%{?rhel} == 9 +%if 0%{?fedora} || 0%{?suse_version} > 1500 || 0%{?rhel} == 9 BuildRequires: gcc-c++ >= 11 %endif +%if 0%{?suse_version} == 1500 +BuildRequires: gcc11-c++ +%endif %if 0%{?rhel} == 8 BuildRequires: %{gts_prefix}-gcc-c++ BuildRequires: %{gts_prefix}-build @@ -648,6 +655,7 @@ Requires: python%{python3_pkgversion}-pecan Requires: python%{python3_pkgversion}-pyOpenSSL Requires: python%{python3_pkgversion}-requests Requires: python%{python3_pkgversion}-dateutil +Requires: python%{python3_pkgversion}-setuptools %if 0%{?fedora} || 0%{?rhel} >= 8 Requires: python%{python3_pkgversion}-cherrypy Requires: python%{python3_pkgversion}-pyyaml @@ -1266,7 +1274,7 @@ This package provides Ceph default alerts for Prometheus. # common ################################################################################# %prep -%autosetup -p1 -n ceph-17.2.5 +%autosetup -p1 -n ceph-17.2.6 %build # Disable lto on systems that do not support symver attribute @@ -1307,6 +1315,10 @@ env | sort mkdir -p %{_vpath_builddir} pushd %{_vpath_builddir} cmake .. 
\ +%if 0%{?suse_version} == 1500 + -DCMAKE_C_COMPILER=gcc-11 \ + -DCMAKE_CXX_COMPILER=g++-11 \ +%endif -DCMAKE_INSTALL_PREFIX=%{_prefix} \ -DCMAKE_INSTALL_LIBDIR:PATH=%{_libdir} \ -DCMAKE_INSTALL_LIBEXECDIR:PATH=%{_libexecdir} \ @@ -1461,7 +1473,7 @@ touch %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys chmod 0600 %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys # firewall templates and /sbin/mount.ceph symlink -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 mkdir -p %{buildroot}/sbin ln -sf %{_sbindir}/mount.ceph %{buildroot}/sbin/mount.ceph %endif @@ -1637,7 +1649,7 @@ exit 0 %{_bindir}/rbd-replay-many %{_bindir}/rbdmap %{_sbindir}/mount.ceph -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 /sbin/mount.ceph %endif %if %{with lttng} diff --git a/ceph/ceph.spec.in b/ceph/ceph.spec.in index 5c5e390f4..e1575c066 100644 --- a/ceph/ceph.spec.in +++ b/ceph/ceph.spec.in @@ -61,7 +61,11 @@ %global _remote_tarball_prefix https://download.ceph.com/tarballs/ %endif %if 0%{?suse_version} +%ifarch s390x +%bcond_with system_pmdk +%else %bcond_without system_pmdk +%endif %bcond_with amqp_endpoint %bcond_with cephfs_java %bcond_with kafka_endpoint @@ -202,9 +206,12 @@ BuildRequires: selinux-policy-devel BuildRequires: gperf BuildRequires: cmake > 3.5 BuildRequires: fuse-devel -%if 0%{?fedora} || 0%{?suse_version} || 0%{?rhel} == 9 +%if 0%{?fedora} || 0%{?suse_version} > 1500 || 0%{?rhel} == 9 BuildRequires: gcc-c++ >= 11 %endif +%if 0%{?suse_version} == 1500 +BuildRequires: gcc11-c++ +%endif %if 0%{?rhel} == 8 BuildRequires: %{gts_prefix}-gcc-c++ BuildRequires: %{gts_prefix}-build @@ -648,6 +655,7 @@ Requires: python%{python3_pkgversion}-pecan Requires: python%{python3_pkgversion}-pyOpenSSL Requires: python%{python3_pkgversion}-requests Requires: python%{python3_pkgversion}-dateutil +Requires: python%{python3_pkgversion}-setuptools %if 0%{?fedora} || 0%{?rhel} >= 8 Requires: python%{python3_pkgversion}-cherrypy Requires: python%{python3_pkgversion}-pyyaml @@ -1307,6 +1315,10 @@ env | sort mkdir -p %{_vpath_builddir} pushd %{_vpath_builddir} cmake .. 
\ +%if 0%{?suse_version} == 1500 + -DCMAKE_C_COMPILER=gcc-11 \ + -DCMAKE_CXX_COMPILER=g++-11 \ +%endif -DCMAKE_INSTALL_PREFIX=%{_prefix} \ -DCMAKE_INSTALL_LIBDIR:PATH=%{_libdir} \ -DCMAKE_INSTALL_LIBEXECDIR:PATH=%{_libexecdir} \ @@ -1461,7 +1473,7 @@ touch %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys chmod 0600 %{buildroot}%{_sharedstatedir}/cephadm/.ssh/authorized_keys # firewall templates and /sbin/mount.ceph symlink -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 mkdir -p %{buildroot}/sbin ln -sf %{_sbindir}/mount.ceph %{buildroot}/sbin/mount.ceph %endif @@ -1637,7 +1649,7 @@ exit 0 %{_bindir}/rbd-replay-many %{_bindir}/rbdmap %{_sbindir}/mount.ceph -%if 0%{?suse_version} && !0%{?usrmerged} +%if 0%{?suse_version} && 0%{?suse_version} < 1550 /sbin/mount.ceph %endif %if %{with lttng} diff --git a/ceph/changelog.upstream b/ceph/changelog.upstream index 011092298..d4a23a078 100644 --- a/ceph/changelog.upstream +++ b/ceph/changelog.upstream @@ -1,3 +1,9 @@ +ceph (17.2.6-1) stable; urgency=medium + + * New upstream release + + -- Ceph Release Team Wed, 05 Apr 2023 15:09:49 +0000 + ceph (17.2.5-1) stable; urgency=medium * New upstream release diff --git a/ceph/cmake/modules/BuildFIO.cmake b/ceph/cmake/modules/BuildFIO.cmake index 481f3edf0..3a0694b54 100644 --- a/ceph/cmake/modules/BuildFIO.cmake +++ b/ceph/cmake/modules/BuildFIO.cmake @@ -2,8 +2,14 @@ function(build_fio) # we use an external project and copy the sources to bin directory to ensure # that object files are built outside of the source tree. include(ExternalProject) - if(ALLOCATOR) - set(FIO_EXTLIBS EXTLIBS=-l${ALLOCATOR}) + if(ALLOC_LIBS) + get_target_property(alloc_lib_path + ${ALLOC_LIBS} IMPORTED_LOCATION) + get_filename_component(alloc_lib_dir + ${alloc_lib_path} DIRECTORY) + get_filename_component(alloc_lib_name + ${alloc_lib_path} NAME) + set(FIO_EXTLIBS "EXTLIBS='-L${alloc_lib_dir} -l:${alloc_lib_name}'") endif() include(FindMake) @@ -20,7 +26,7 @@ function(build_fio) SOURCE_DIR ${source_dir} BUILD_IN_SOURCE 1 CONFIGURE_COMMAND /configure - BUILD_COMMAND ${make_cmd} fio EXTFLAGS=-Wno-format-truncation ${FIO_EXTLIBS} + BUILD_COMMAND ${make_cmd} fio EXTFLAGS=-Wno-format-truncation "${FIO_EXTLIBS}" INSTALL_COMMAND cp /fio ${CMAKE_BINARY_DIR}/bin LOG_CONFIGURE ON LOG_BUILD ON diff --git a/ceph/debian/control b/ceph/debian/control index cbb5ccaa4..bc5ac8dd0 100644 --- a/ceph/debian/control +++ b/ceph/debian/control @@ -93,6 +93,7 @@ Build-Depends: automake, tox , python3-coverage , python3-dateutil , + python3-pkg-resources , python3-openssl , python3-prettytable , python3-requests , diff --git a/ceph/doc/_static/css/custom.css b/ceph/doc/_static/css/custom.css index c44ccb450..2a37cab99 100644 --- a/ceph/doc/_static/css/custom.css +++ b/ceph/doc/_static/css/custom.css @@ -1,3 +1,23 @@ +dt { + scroll-margin-top: 3em; +} + +h2 { + scroll-margin-top: 4em; +} + +h3 { + scroll-margin-top: 4em; +} + +section { + scroll-margin-top: 4em; +} + +span { + scroll-margin-top: 2em; +} + ul.simple > li > ul > li:last-child { margin-block-end : 1em; } diff --git a/ceph/doc/architecture.rst b/ceph/doc/architecture.rst index 33558c0a8..46be74603 100644 --- a/ceph/doc/architecture.rst +++ b/ceph/doc/architecture.rst @@ -13,6 +13,7 @@ replicate and redistribute data dynamically. .. image:: images/stack.png +.. _arch-ceph-storage-cluster: The Ceph Storage Cluster ======================== @@ -59,7 +60,7 @@ service interfaces built on top of ``librados``. 
Storing Data ------------ -The Ceph Storage Cluster receives data from :term:`Ceph Clients`--whether it +The Ceph Storage Cluster receives data from :term:`Ceph Client`\s--whether it comes through a :term:`Ceph Block Device`, :term:`Ceph Object Storage`, the :term:`Ceph File System` or a custom implementation you create using ``librados``-- which is stored as RADOS objects. Each object is stored on an @@ -80,7 +81,7 @@ stored in a monolithic database-like fashion. Ceph OSD Daemons store data as objects in a flat namespace (e.g., no hierarchy of directories). An object has an identifier, binary data, and metadata consisting of a set of name/value pairs. The semantics are completely -up to :term:`Ceph Clients`. For example, CephFS uses metadata to store file +up to :term:`Ceph Client`\s. For example, CephFS uses metadata to store file attributes such as the file owner, created date, last modified date, and so forth. @@ -135,6 +136,8 @@ Placement of Replicated Data`_. .. index:: architecture; cluster map +.. _architecture_cluster_map: + Cluster Map ~~~~~~~~~~~ @@ -581,7 +584,7 @@ objects. Peering and Sets ~~~~~~~~~~~~~~~~ -In previous sections, we noted that Ceph OSD Daemons check each others +In previous sections, we noted that Ceph OSD Daemons check each other's heartbeats and report back to the Ceph Monitor. Another thing Ceph OSD daemons do is called 'peering', which is the process of bringing all of the OSDs that store a Placement Group (PG) into agreement about the state of all of the @@ -1619,13 +1622,13 @@ instance for high availability. -.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf +.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science) .. _Monitor Config Reference: ../rados/configuration/mon-config-ref .. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg .. _Heartbeats: ../rados/configuration/mon-osd-interaction .. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds -.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf +.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.io/assets/pdfs/weil-crush-sc06.pdf .. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing .. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure .. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure diff --git a/ceph/doc/ceph-volume/lvm/activate.rst b/ceph/doc/ceph-volume/lvm/activate.rst index eef5a0101..9faf1f74e 100644 --- a/ceph/doc/ceph-volume/lvm/activate.rst +++ b/ceph/doc/ceph-volume/lvm/activate.rst @@ -2,7 +2,7 @@ ``activate`` ============ - + Once :ref:`ceph-volume-lvm-prepare` is completed, and all the various steps that entails are done, the volume is ready to get "activated". @@ -13,7 +13,7 @@ understand what OSD is enabled and needs to be mounted. .. note:: The execution of this call is fully idempotent, and there is no side-effects when running multiple times -For OSDs deployed by cephadm, please refer to :ref:cephadm-osd-activate: +For OSDs deployed by cephadm, please refer to :ref:`cephadm-osd-activate` instead. New OSDs @@ -29,7 +29,7 @@ need to be supplied. 
For example:: Activating all OSDs ------------------- -.. note:: For OSDs deployed by cephadm, please refer to :ref:cephadm-osd-activate: +.. note:: For OSDs deployed by cephadm, please refer to :ref:`cephadm-osd-activate` instead. It is possible to activate all existing OSDs at once by using the ``--all`` diff --git a/ceph/doc/ceph-volume/lvm/encryption.rst b/ceph/doc/ceph-volume/lvm/encryption.rst index 1483ef32e..66f4ee182 100644 --- a/ceph/doc/ceph-volume/lvm/encryption.rst +++ b/ceph/doc/ceph-volume/lvm/encryption.rst @@ -4,45 +4,41 @@ Encryption ========== Logical volumes can be encrypted using ``dmcrypt`` by specifying the -``--dmcrypt`` flag when creating OSDs. Encryption can be done in different ways, -specially with LVM. ``ceph-volume`` is somewhat opinionated with the way it -sets up encryption with logical volumes so that the process is consistent and +``--dmcrypt`` flag when creating OSDs. When using LVM, logical volumes can be +encrypted in different ways. ``ceph-volume`` does not offer as many options as +LVM does, but it encrypts logical volumes in a way that is consistent and robust. -In this case, ``ceph-volume lvm`` follows these constraints: +In this case, ``ceph-volume lvm`` follows this constraint: -* only LUKS (version 1) is used -* Logical Volumes are encrypted, while their underlying PVs (physical volumes) - aren't -* Non-LVM devices like partitions are also encrypted with the same OSD key +* Non-LVM devices (such as partitions) are encrypted with the same OSD key. LUKS ---- -There are currently two versions of LUKS, 1 and 2. Version 2 is a bit easier -to implement but not widely available in all distros Ceph supports. LUKS 1 is -not going to be deprecated in favor of LUKS 2, so in order to have as wide -support as possible, ``ceph-volume`` uses LUKS version 1. +There are currently two versions of LUKS, 1 and 2. Version 2 is a bit easier to +implement but not widely available in all Linux distributions supported by +Ceph. -.. note:: Version 1 of LUKS is just referenced as "LUKS" whereas version 2 is - referred to as LUKS2 +.. note:: Version 1 of LUKS is referred to in this documentation as "LUKS". + Version 2 is of LUKS is referred to in this documentation as "LUKS2". LUKS on LVM ----------- -Encryption is done on top of existing logical volumes (unlike encrypting the -physical device). Any single logical volume can be encrypted while other -volumes can remain unencrypted. This method also allows for flexible logical +Encryption is done on top of existing logical volumes (this is not the same as +encrypting the physical device). Any single logical volume can be encrypted, +leaving other volumes unencrypted. This method also allows for flexible logical volume setups, since encryption will happen once the LV is created. Workflow -------- -When setting up the OSD, a secret key will be created, that will be passed -along to the monitor in JSON format as ``stdin`` to prevent the key from being +When setting up the OSD, a secret key is created. That secret key is passed +to the monitor in JSON format as ``stdin`` to prevent the key from being captured in the logs. -The JSON payload looks something like:: +The JSON payload looks something like this:: { "cephx_secret": CEPHX_SECRET, @@ -51,36 +47,38 @@ The JSON payload looks something like:: } The naming convention for the keys is **strict**, and they are named like that -for the hardcoded (legacy) names ceph-disk used. +for the hardcoded (legacy) names used by ceph-disk. 
* ``cephx_secret`` : The cephx key used to authenticate * ``dmcrypt_key`` : The secret (or private) key to unlock encrypted devices * ``cephx_lockbox_secret`` : The authentication key used to retrieve the ``dmcrypt_key``. It is named *lockbox* because ceph-disk used to have an - unencrypted partition named after it, used to store public keys and other - OSD metadata. + unencrypted partition named after it, which was used to store public keys and + other OSD metadata. The naming convention is strict because Monitors supported the naming -convention by ceph-disk, which used these key names. In order to keep -compatibility and prevent ceph-disk from breaking, ceph-volume will use the same -naming convention *although they don't make sense for the new encryption +convention of ceph-disk, which used these key names. In order to maintain +compatibility and prevent ceph-disk from breaking, ceph-volume uses the same +naming convention *although it does not make sense for the new encryption workflow*. -After the common steps of setting up the OSD during the prepare stage, either -with :term:`filestore` or :term:`bluestore`, the logical volume is left ready -to be activated, regardless of the state of the device (encrypted or decrypted). +After the common steps of setting up the OSD during the "prepare stage" (either +with :term:`filestore` or :term:`bluestore`), the logical volume is left ready +to be activated, regardless of the state of the device (encrypted or +decrypted). -At activation time, the logical volume will get decrypted and the OSD started -once the process completes correctly. +At the time of its activation, the logical volume is decrypted. The OSD starts +after the process completes correctly. -Summary of the encryption workflow for creating a new OSD: +Summary of the encryption workflow for creating a new OSD +---------------------------------------------------------- -#. OSD is created, both lockbox and dmcrypt keys are created, and sent along - with JSON to the monitors, indicating an encrypted OSD. +#. OSD is created. Both lockbox and dmcrypt keys are created and sent to the + monitors in JSON format, indicating an encrypted OSD. #. All complementary devices (like journal, db, or wal) get created and encrypted with the same OSD key. Key is stored in the LVM metadata of the - OSD + OSD. #. Activation continues by ensuring devices are mounted, retrieving the dmcrypt - secret key from the monitors and decrypting before the OSD gets started. + secret key from the monitors, and decrypting before the OSD gets started. diff --git a/ceph/doc/ceph-volume/lvm/prepare.rst b/ceph/doc/ceph-volume/lvm/prepare.rst index 21cae4ee5..ae6aac414 100644 --- a/ceph/doc/ceph-volume/lvm/prepare.rst +++ b/ceph/doc/ceph-volume/lvm/prepare.rst @@ -2,25 +2,22 @@ ``prepare`` =========== -This subcommand allows a :term:`filestore` or :term:`bluestore` setup. It is -recommended to pre-provision a logical volume before using it with -``ceph-volume lvm``. +Before you run ``ceph-volume lvm prepare``, we recommend that you provision a +logical volume. Then you can run ``prepare`` on that logical volume. -Logical volumes are not altered except for adding extra metadata. +``prepare`` adds metadata to logical volumes but does not alter them in any +other way. -.. note:: This is part of a two step process to deploy an OSD. If looking for - a single-call way, please see :ref:`ceph-volume-lvm-create` +.. note:: This is part of a two-step process to deploy an OSD. 
If you prefer + to deploy an OSD by using only one command, see :ref:`ceph-volume-lvm-create`. -To help identify volumes, the process of preparing a volume (or volumes) to -work with Ceph, the tool will assign a few pieces of metadata information using -:term:`LVM tags`. - -:term:`LVM tags` makes volumes easy to discover later, and help identify them as -part of a Ceph system, and what role they have (journal, filestore, bluestore, -etc...) - -Although :term:`bluestore` is the default, the back end can be specified with: +``prepare`` uses :term:`LVM tags` to assign several pieces of metadata to a +logical volume. Volumes tagged in this way are easier to identify and easier to +use with Ceph. :term:`LVM tags` identify logical volumes by the role that they +play in the Ceph cluster (for example: BlueStore data or BlueStore WAL+DB). +:term:`BlueStore` is the default backend. Ceph permits changing +the backend, which can be done by using the following flags and arguments: * :ref:`--filestore ` * :ref:`--bluestore ` @@ -29,50 +26,58 @@ Although :term:`bluestore` is the default, the back end can be specified with: ``bluestore`` ------------- -The :term:`bluestore` objectstore is the default for new OSDs. It offers a bit -more flexibility for devices compared to :term:`filestore`. -Bluestore supports the following configurations: - -* A block device, a block.wal, and a block.db device -* A block device and a block.wal device -* A block device and a block.db device -* A single block device - -The bluestore subcommand accepts physical block devices, partitions on -physical block devices or logical volumes as arguments for the various device parameters -If a physical device is provided, a logical volume will be created. A volume group will -either be created or reused it its name begins with ``ceph``. -This allows a simpler approach at using LVM but at the cost of flexibility: -there are no options or configurations to change how the LV is created. +:term:`Bluestore` is the default backend for new OSDs. It +offers more flexibility for devices than :term:`filestore` does. Bluestore +supports the following configurations: + +* a block device, a block.wal device, and a block.db device +* a block device and a block.wal device +* a block device and a block.db device +* a single block device + +The ``bluestore`` subcommand accepts physical block devices, partitions on physical +block devices, or logical volumes as arguments for the various device +parameters. If a physical block device is provided, a logical volume will be +created. If the provided volume group's name begins with `ceph`, it will be +created if it does not yet exist and it will be clobbered and reused if it +already exists. This allows for a simpler approach to using LVM but at the +cost of flexibility: no option or configuration can be used to change how the +logical volume is created. The ``block`` is specified with the ``--data`` flag, and in its simplest use -case it looks like:: +case it looks like: + +.. prompt:: bash # ceph-volume lvm prepare --bluestore --data vg/lv -A raw device can be specified in the same way:: +A raw device can be specified in the same way: + +.. prompt:: bash # ceph-volume lvm prepare --bluestore --data /path/to/device -For enabling :ref:`encryption `, the ``--dmcrypt`` flag is required:: +For enabling :ref:`encryption `, the ``--dmcrypt`` flag is required: + +.. 
prompt:: bash # ceph-volume lvm prepare --bluestore --dmcrypt --data vg/lv -If a ``block.db`` or a ``block.wal`` is needed (they are optional for -bluestore) they can be specified with ``--block.db`` and ``--block.wal`` -accordingly. These can be a physical device, a partition or -a logical volume. +If a ``block.db`` device or a ``block.wal`` device is needed, it can be +specified with ``--block.db`` or ``--block.wal``. These can be physical +devices, partitions, or logical volumes. ``block.db`` and ``block.wal`` are +optional for bluestore. -For both ``block.db`` and ``block.wal`` partitions aren't made logical volumes -because they can be used as-is. +For both ``block.db`` and ``block.wal``, partitions can be used as-is, and +therefore are not made into logical volumes. -While creating the OSD directory, the process will use a ``tmpfs`` mount to -place all the files needed for the OSD. These files are initially created by -``ceph-osd --mkfs`` and are fully ephemeral. +While creating the OSD directory, the process uses a ``tmpfs`` mount to hold +the files needed for the OSD. These files are created by ``ceph-osd --mkfs`` +and are ephemeral. -A symlink is always created for the ``block`` device, and optionally for -``block.db`` and ``block.wal``. For a cluster with a default name, and an OSD -id of 0, the directory could look like:: +A symlink is created for the ``block`` device, and is optional for ``block.db`` +and ``block.wal``. For a cluster with a default name and an OSD ID of 0, the +directory looks like this:: # ls -l /var/lib/ceph/osd/ceph-0 lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block -> /dev/ceph-be2b6fbd-bcf2-4c51-b35d-a35a162a02f0/osd-block-25cf0a05-2bc6-44ef-9137-79d65bd7ad62 @@ -85,11 +90,11 @@ id of 0, the directory could look like:: -rw-------. 1 ceph ceph 10 Oct 20 13:05 type -rw-------. 1 ceph ceph 2 Oct 20 13:05 whoami -In the above case, a device was used for ``block`` so ``ceph-volume`` create -a volume group and a logical volume using the following convention: +In the above case, a device was used for ``block``, so ``ceph-volume`` created +a volume group and a logical volume using the following conventions: -* volume group name: ``ceph-{cluster fsid}`` or if the vg exists already - ``ceph-{random uuid}`` +* volume group name: ``ceph-{cluster fsid}`` (or if the volume group already + exists: ``ceph-{random uuid}``) * logical volume name: ``osd-block-{osd_fsid}`` @@ -98,78 +103,100 @@ a volume group and a logical volume using the following convention: ``filestore`` ------------- -This is the OSD backend that allows preparation of logical volumes for -a :term:`filestore` objectstore OSD. +``Filestore`` is the OSD backend that prepares logical volumes for a +:term:`filestore`-backed object-store OSD. + + +``Filestore`` uses a logical volume to store OSD data and it uses +physical devices, partitions, or logical volumes to store the journal. If a +physical device is used to create a filestore backend, a logical volume will be +created on that physical device. If the provided volume group's name begins +with `ceph`, it will be created if it does not yet exist and it will be +clobbered and reused if it already exists. No special preparation is needed for +these volumes, but be sure to meet the minimum size requirements for OSD data and +for the journal. + +Use the following command to create a basic filestore OSD: -It can use a logical volume for the OSD data and a physical device, a partition -or logical volume for the journal. 
A physical device will have a logical volume -created on it. A volume group will either be created or reused it its name begins -with ``ceph``. No special preparation is needed for these volumes other than -following the minimum size requirements for data and journal. +.. prompt:: bash # -The CLI call looks like this of a basic standalone filestore OSD:: + ceph-volume lvm prepare --filestore --data - ceph-volume lvm prepare --filestore --data +Use this command to deploy filestore with an external journal: -To deploy file store with an external journal:: +.. prompt:: bash # - ceph-volume lvm prepare --filestore --data --journal + ceph-volume lvm prepare --filestore --data --journal -For enabling :ref:`encryption `, the ``--dmcrypt`` flag is required:: +Use this command to enable :ref:`encryption `, and note that the ``--dmcrypt`` flag is required: - ceph-volume lvm prepare --filestore --dmcrypt --data --journal +.. prompt:: bash # -Both the journal and data block device can take three forms: + ceph-volume lvm prepare --filestore --dmcrypt --data --journal + +The data block device and the journal can each take one of three forms: * a physical block device * a partition on a physical block device * a logical volume -When using logical volumes the value *must* be of the format -``volume_group/logical_volume``. Since logical volume names -are not enforced for uniqueness, this prevents accidentally -choosing the wrong volume. +If you use a logical volume to deploy filestore, the value that you pass in the +command *must* be of the format ``volume_group/logical_volume_name``. Since logical +volume names are not enforced for uniqueness, using this format is an important +safeguard against accidentally choosing the wrong volume (and clobbering its data). + +If you use a partition to deploy filestore, the partition *must* contain a +``PARTUUID`` that can be discovered by ``blkid``. This ensures that the +partition can be identified correctly regardless of the device's name (or path). -When using a partition, it *must* contain a ``PARTUUID``, that can be -discovered by ``blkid``. THis ensure it can later be identified correctly -regardless of the device name (or path). +For example, to use a logical volume for OSD data and a partition +(``/dev/sdc1``) for the journal, run a command of this form: -For example: passing a logical volume for data and a partition ``/dev/sdc1`` for -the journal:: +.. prompt:: bash # - ceph-volume lvm prepare --filestore --data volume_group/lv_name --journal /dev/sdc1 + ceph-volume lvm prepare --filestore --data volume_group/logical_volume_name --journal /dev/sdc1 -Passing a bare device for data and a logical volume ias the journal:: +Or, to use a bare device for data and a logical volume for the journal: - ceph-volume lvm prepare --filestore --data /dev/sdc --journal volume_group/journal_lv +.. prompt:: bash # -A generated uuid is used to ask the cluster for a new OSD. These two pieces are -crucial for identifying an OSD and will later be used throughout the -:ref:`ceph-volume-lvm-activate` process. + ceph-volume lvm prepare --filestore --data /dev/sdc --journal volume_group/journal_lv + +A generated UUID is used when asking the cluster for a new OSD. These two +pieces of information (the OSD ID and the OSD UUID) are necessary for +identifying a given OSD and will later be used throughout the +:ref:`activation` process. 
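A brief, hedged sketch of how these two identifiers are consumed later may help here (the OSD ID ``0`` and the UUID below are placeholders for illustration, not values taken from this patch). A prepared OSD is typically located and then handed to activation like so::

    # inspect prepared OSDs; the LVM tags carry the OSD ID and OSD UUID
    ceph-volume lvm list

    # activate using the OSD ID and OSD UUID recorded at prepare time
    ceph-volume lvm activate 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8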
The OSD data directory is created using the following convention:: /var/lib/ceph/osd/- -At this point the data volume is mounted at this location, and the journal -volume is linked:: +To link the journal volume to the mounted data volume, use this command: + +.. prompt:: bash # + + ln -s /path/to/journal /var/lib/ceph/osd/-/journal + +To fetch the monmap by using the bootstrap key from the OSD, use this command: + +.. prompt:: bash # - ln -s /path/to/journal /var/lib/ceph/osd/-/journal + /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring + /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o + /var/lib/ceph/osd/-/activate.monmap -The monmap is fetched using the bootstrap key from the OSD:: +To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command: +.. prompt:: bash # - /usr/bin/ceph --cluster ceph --name client.bootstrap-osd - --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring - mon getmap -o /var/lib/ceph/osd/-/activate.monmap + ceph-osd --cluster ceph --mkfs --mkkey -i \ --monmap + /var/lib/ceph/osd/-/activate.monmap --osd-data \ + /var/lib/ceph/osd/- --osd-journal + /var/lib/ceph/osd/-/journal \ --osd-uuid + --keyring /var/lib/ceph/osd/-/keyring \ --setuser ceph + --setgroup ceph -``ceph-osd`` will be called to populate the OSD directory, that is already -mounted, re-using all the pieces of information from the initial steps:: +All of the information from the previous steps is used in the above command. - ceph-osd --cluster ceph --mkfs --mkkey -i \ - --monmap /var/lib/ceph/osd/-/activate.monmap --osd-data \ - /var/lib/ceph/osd/- --osd-journal /var/lib/ceph/osd/-/journal \ - --osd-uuid --keyring /var/lib/ceph/osd/-/keyring \ - --setuser ceph --setgroup ceph .. _ceph-volume-lvm-partitions: diff --git a/ceph/doc/cephadm/adoption.rst b/ceph/doc/cephadm/adoption.rst index 6d9ec1251..2b38d42d2 100644 --- a/ceph/doc/cephadm/adoption.rst +++ b/ceph/doc/cephadm/adoption.rst @@ -113,15 +113,15 @@ Adoption process ssh-copy-id -f -i ~/ceph.pub root@ .. note:: - It is also possible to import an existing ssh key. See - :ref:`ssh errors ` in the troubleshooting + It is also possible to import an existing SSH key. See + :ref:`SSH errors ` in the troubleshooting document for instructions that describe how to import existing - ssh keys. + SSH keys. .. note:: - It is also possible to have cephadm use a non-root user to ssh + It is also possible to have cephadm use a non-root user to SSH into cluster hosts. This user needs to have passwordless sudo access. - Use ``ceph cephadm set-user `` and copy the ssh key to that user. + Use ``ceph cephadm set-user `` and copy the SSH key to that user. See :ref:`cephadm-ssh-user` #. Tell cephadm which hosts to manage: diff --git a/ceph/doc/cephadm/compatibility.rst b/ceph/doc/cephadm/compatibility.rst index 7c75b7445..46ab62a62 100644 --- a/ceph/doc/cephadm/compatibility.rst +++ b/ceph/doc/cephadm/compatibility.rst @@ -8,27 +8,40 @@ Compatibility and Stability Compatibility with Podman Versions ---------------------------------- -Podman and Ceph have different end-of-life strategies that -might make it challenging to find compatible Podman and Ceph -versions +Podman and Ceph have different end-of-life strategies. This means that care +must be taken in finding a version of Podman that is compatible with Ceph. 
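A quick way to check which versions are actually in play on a given host (assuming both the ``podman`` and ``ceph`` command-line tools are installed) is to query them directly; the outputs shown in the comments are illustrative only::

    podman --version   # e.g. "podman version 3.0.1"
    ceph --version     # e.g. "ceph version 17.2.6 (...) quincy (stable)"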
-Those versions are expected to work: +This table shows which version pairs are expected to work or not work together: -+-----------+---------------------------------------+ -| Ceph | Podman | -+-----------+-------+-------+-------+-------+-------+ -| | 1.9 | 2.0 | 2.1 | 2.2 | 3.0 | -+===========+=======+=======+=======+=======+=======+ -| <= 15.2.5 | True | False | False | False | False | -+-----------+-------+-------+-------+-------+-------+ -| >= 15.2.6 | True | True | True | False | False | -+-----------+-------+-------+-------+-------+-------+ -| >= 16.2.1 | False | True | True | False | True | -+-----------+-------+-------+-------+-------+-------+ ++-----------+-----------------------------------------------+ +| Ceph | Podman | ++-----------+-------+-------+-------+-------+-------+-------+ +| | 1.9 | 2.0 | 2.1 | 2.2 | 3.0 | > 3.0 | ++===========+=======+=======+=======+=======+=======+=======+ +| <= 15.2.5 | True | False | False | False | False | False | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 15.2.6 | True | True | True | False | False | False | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 16.2.1 | False | True | True | False | True | True | ++-----------+-------+-------+-------+-------+-------+-------+ +| >= 17.2.0 | False | True | True | False | True | True | ++-----------+-------+-------+-------+-------+-------+-------+ + +.. note:: + + While not all podman versions have been actively tested against + all Ceph versions, there are no known issues with using podman + version 3.0 or greater with Ceph Quincy and later releases. .. warning:: - Only podman versions that are 2.0.0 and higher work with Ceph Pacific, with the exception of podman version 2.2.1, which does not work with Ceph Pacific. kubic stable is known to work with Ceph Pacific, but it must be run with a newer kernel. + + To use Podman with Ceph Pacific, you must use **a version of Podman that + is 2.0.0 or higher**. However, **Podman version 2.2.1 does not work with + Ceph Pacific**. + + "Kubic stable" is known to work with Ceph Pacific, but it must be run + with a newer kernel. .. _cephadm-stability: @@ -36,19 +49,18 @@ Those versions are expected to work: Stability --------- -Cephadm is actively in development. Please be aware that some -functionality is still rough around the edges. Especially the -following components are working with cephadm, but the -documentation is not as complete as we would like, and there may be some -changes in the near future: - -- RGW +Cephadm is relatively stable but new functionality is still being +added and bugs are occasionally discovered. If issues are found, please +open a tracker issue under the Orchestrator component (https://tracker.ceph.com/projects/orchestrator/issues) -Cephadm support for the following features is still under development and may see breaking -changes in future releases: +Cephadm support remains under development for the following features: -- Ingress -- Cephadm exporter daemon -- cephfs-mirror +- ceph-exporter deployment +- stretch mode integration +- monitoring stack (moving towards prometheus service discover and providing TLS) +- RGW multisite deployment support (requires lots of manual steps currently) +- cephadm agent -In case you encounter issues, see also :ref:`cephadm-pause`. +If a cephadm command fails or a service stops running properly, see +:ref:`cephadm-pause` for instructions on how to pause the Ceph cluster's +background activity and how to disable cephadm. 
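The :ref:`cephadm-pause` reference above amounts to a couple of standard orchestrator commands. As a hedged reminder (these are long-standing cephadm commands, not something introduced by this patch), pausing and later resuming the cluster's background activity looks roughly like this::

    # stop cephadm from scheduling further background actions
    ceph orch pause

    # investigate the failing command or service, then resume
    ceph orch resume

    # or disable the cephadm mgr module entirely
    ceph mgr module disable cephadm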
diff --git a/ceph/doc/cephadm/host-management.rst b/ceph/doc/cephadm/host-management.rst index fee286e3a..b2c514c8c 100644 --- a/ceph/doc/cephadm/host-management.rst +++ b/ceph/doc/cephadm/host-management.rst @@ -4,17 +4,26 @@ Host Management =============== -To list hosts associated with the cluster: +Listing Hosts +============= + +Run a command of this form to list hosts associated with the cluster: .. prompt:: bash # - ceph orch host ls [--format yaml] [--host-pattern ] [--label