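# Teuthology job: interrupt PG removal on osd.0 while a pool is being deleted,
# then verify the OSD can be restarted and the cluster returns to health.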
roles:
- - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - osd.2
  - client.0
openstack:
- volumes: # attached to each instance
    count: 3
    size: 10 # GB
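# Install Ceph and bring up the cluster; the log-whitelist entries cover health
# warnings expected while osd.0 is deliberately failed below.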
tasks:
- install:
- ceph:
    log-whitelist:
    - but it is still running
    - slow request
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(OSD_
    - \(PG_
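# Create pool "foo", arm osd.0 to fail during PG removal, then delete the pool
# to trigger that removal.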
- exec:
    client.0:
    - sudo ceph osd pool create foo 128 128
    - sudo ceph osd pool application enable foo rados
    - sleep 5
    - sudo ceph tell osd.0 injectargs -- --osd-inject-failure-on-pg-removal
    - sudo ceph osd pool delete foo foo --yes-i-really-really-mean-it
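# The injected failure should take osd.0 down while it removes the deleted pool's PGs.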
- ceph.wait_for_failure: [osd.0]
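# Explicitly mark osd.0 down in the osdmap before restarting it.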
- exec:
    client.0:
    - sudo ceph osd down 0
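# Restart osd.0 and wait for the cluster to report healthy again.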
- ceph.restart: [osd.0]
- ceph.healthy: