=================================
Deploying a development cluster
=================================
In order to develop on Ceph, the utility *vstart.sh* allows you to deploy a
fake local cluster on your machine for development purposes. It starts rgw,
mon, osd and/or mds, or all of them if not specified.
To start your development cluster, type the following::

    vstart.sh [OPTIONS]...
In order to stop the cluster, you can type::

    ./stop.sh

Options
=======
.. option:: -b, --bluestore

   Use bluestore as the objectstore backend for osds.

.. option:: --cache <pool>

   Set a cache-tier for the specified pool.
.. option:: -d, --debug

   Launch in debugging mode.

.. option:: -e

   Create an erasure pool.
.. option:: -f, --filestore

   Use filestore as the osd objectstore backend.

.. option:: --hitset <pool> <hit_set_type>

   Enable hitset tracking.

.. option:: -i ip_address

   Bind to the specified *ip_address* instead of guessing and resolving it
   from the hostname.
.. option:: -k

   Keep old configuration files instead of overwriting them.
.. option:: -K, --kstore

   Use kstore as the osd objectstore backend.

.. option:: -l, --localhost

   Use localhost instead of hostname.

.. option:: -m ip[:port]

   Specifies monitor *ip* address and *port*.

.. option:: --memstore

   Use memstore as the objectstore backend for osds.

.. option:: --multimds <count>

   Allow multimds with maximum active count.

.. option:: -n, --new

   Create a new cluster.
.. option:: -N, --not-new

   Reuse existing cluster config (default).

.. option:: --nodaemon

   Use ceph-run as wrapper for mon/osd/mds.
.. option:: --nolockdep

   Disable lockdep.
.. option:: -o <config>

   Add *config* to all sections in the ceph configuration.

.. option:: --rgw_port <port>

   Specify ceph rgw http listen port.

.. option:: --rgw_frontend <frontend>

   Specify the rgw frontend configuration (default is civetweb).

.. option:: --rgw_compression <compression_type>

   Specify the rgw compression plugin (default is disabled).

.. option:: --smallmds

   Configure mds with small limit cache size.
.. option:: --short

   Short object names only; necessary for ext4 dev.
.. option:: --valgrind[_{osd,mds,mon}] 'valgrind_toolname [args...]'

   Launch the osd, mds, mon, or all of the Ceph binaries using valgrind with
   the specified tool and arguments.

.. option:: --without-dashboard

   Do not run the mgr dashboard.
.. option:: -x

   Enable cephx (on by default).
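
Several of the options above can be combined. As an illustration only
(assuming a built Ceph source tree with *vstart.sh* in your path), the
following starts a cluster in debugging mode, with bluestore osds, bound to
localhost, and with rgw listening on port 8000::

    vstart.sh -d -b -l --rgw_port 8000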
Environment variables
=====================

Environment variables such as *OSD*, *MON* and *RGW* contain the number of
instances of the corresponding Ceph daemon you want to start. For example::

    OSD=3 MON=3 RGW=1 vstart.sh
============================================================
Deploying multiple development clusters on the same machine
============================================================
In order to bring up multiple Ceph clusters on the same machine, *mstart.sh*,
a small wrapper around the above *vstart.sh*, can help.
To start multiple clusters, run mstart for each cluster you want to deploy;
it will start monitors and rgws for each cluster on different ports, allowing
you to run multiple mons, rgws etc. on the same machine. Invoke it in the
following way::

    mstart.sh <cluster-name> <vstart options>
For example::

    ./mstart.sh cluster1 -n
For stopping a cluster, you do::

    ./mstop.sh <cluster-name>
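
For instance, a sketch of bringing up two clusters and later stopping the
first (the cluster names here are arbitrary examples)::

    ./mstart.sh cluster1 -n
    ./mstart.sh cluster2 -n
    ./mstop.sh cluster1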