replicated pool to the erasure-coded pool) if they have not been
accessed in a week.
-The erasure-coded pool crush ruleset targets hardware designed for
+The erasure-coded pool CRUSH rule targets hardware designed for
cold storage with high latency and slow access time. The replicated
-pool crush ruleset targets faster hardware to provide better response
+pool CRUSH rule targets faster hardware to provide better response
times.
Cheap multidatacenter storage
datacenter contains the same amount of storage with no power-supply
backup and no air-cooling system.
-An erasure-coded pool is created with a crush map ruleset that will
+An erasure-coded pool is created with a CRUSH rule that will
ensure no data loss if at most three datacenters fail
simultaneously. The storage overhead is 50%, with the erasure code configured
to split data into six chunks (k=6) and create three coding chunks (m=3). With
$ ceph osd pool create ecpool 12 12 erasure
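As a quick sanity check on the k=6, m=3 arithmetic above, a minimal Python
sketch (the helper name is illustrative, not part of Ceph):

```python
def ec_overhead(k, m):
    """Storage overhead of an erasure code with k data and m coding chunks.

    Raw space used is (k + m) / k times the payload, so the extra
    fraction is m / k. Any m chunks may be lost without losing data.
    """
    return m / k

# k=6, m=3: each object is cut into 6 data chunks plus 3 coding chunks,
# so the pool stores 1.5x the payload (a 50% overhead) and survives the
# simultaneous loss of any 3 chunks -- here, any 3 datacenters.
print(f"overhead: {ec_overhead(6, 3):.0%}")  # 50%
```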
-Set up an erasure-coded pool and the associated crush ruleset::
+Set up an erasure-coded pool and the associated CRUSH rule ``ecrule``::
- $ ceph osd crush rule create-erasure ecruleset
+ $ ceph osd crush rule create-erasure ecrule
$ ceph osd pool create ecpool 12 12 erasure \
- default ecruleset
+ default ecrule
-Set the ruleset failure domain to osd (instead of the host which is the default)::
+Set the CRUSH failure domain to osd (instead of host, which is the default)::
$ ceph osd erasure-code-profile set myprofile \
crush-failure-domain=osd
$ ceph osd erasure-code-profile ls
default
-Set the ruleset to take ssd (instead of default)::
+Set the CRUSH root to ssd (instead of default)::
$ ceph osd erasure-code-profile set myprofile \
crush-root=ssd
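Tying the profile settings above together, one possible end-to-end sequence
(a sketch assuming a running cluster; the pool and profile names follow the
examples above, and k/m match the multidatacenter scenario):

```shell
# Define a profile that splits objects into 6 data + 3 coding chunks,
# spreads chunks across individual OSDs, and places them under the
# CRUSH root named "ssd".
ceph osd erasure-code-profile set myprofile \
      k=6 m=3 \
      crush-failure-domain=osd \
      crush-root=ssd

# Create an erasure-coded pool from that profile.
ceph osd pool create ecpool 12 12 erasure myprofile
```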