By switching from the 'ceph osd tree' to the 'ceph osd df tree' mon API
equivalent, we get the same data structure with more information per
OSD. One of the additions is the number of placement groups (PGs)
stored on that OSD.
The number of PGs per OSD is an important metric, for example when
trying to figure out why performance is not as good as expected.
Adding it to the OSD overview, visible by default, should therefore
reduce how often one needs to fall back to the CLI.
Comparing runtime cost on a 3 node ceph cluster with 4 OSDs each doing 50k
iterations gives:
               Rate osd-df-tree osd-tree
osd-df-tree  9141/s          --     -25%
osd-tree    12136/s         33%       --
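
The matrix above is the output format of Perl's Benchmark::cmpthese. A
minimal sketch of how such a comparison could be run is below; the 50k
iteration count matches the description, but the exact script used is
not part of this patch, and running it requires a reachable Ceph
cluster with valid credentials for PVE::RADOS:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Benchmark qw(cmpthese);
use PVE::RADOS;    # needs a live Ceph cluster to connect to

my $rados = PVE::RADOS->new();

# Compare the plain 'osd tree' mon command against the richer
# 'osd df' tree output, 50k iterations each; cmpthese() prints
# the rate and relative-percentage matrix shown above.
cmpthese(50_000, {
    'osd-tree'    => sub { $rados->mon_command({ prefix => 'osd tree' }) },
    'osd-df-tree' => sub {
        $rados->mon_command({ prefix => 'osd df', output_method => 'tree' });
    },
});
```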
So, while definitely a bit slower, it is still in the µs range, and as
such below the HTTP-in-TLS-in-TCP connection setup time for most
users, so worth the extra useful information.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ TL: slightly reworded subject and added benchmark data ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
PVE::Ceph::Tools::check_ceph_inited();
my $rados = PVE::RADOS->new();
- my $res = $rados->mon_command({ prefix => 'osd tree' });
+ my $res = $rados->mon_command({ prefix => 'osd df', output_method => 'tree', });
die "no tree nodes found\n" if !($res && $res->{nodes});
type => $e->{type}
};
- foreach my $opt (qw(status crush_weight reweight device_class)) {
+ foreach my $opt (qw(status crush_weight reweight device_class pgs)) {
$new->{$opt} = $e->{$opt} if defined($e->{$opt});
}
renderer: 'render_osd_latency',
width: 120,
},
+ {
+ text: 'PGs',
+ dataIndex: 'pgs',
+ align: 'right',
+ renderer: 'render_osd_val',
+ width: 90,
+ },
],