=================================
 Deploying a development cluster
=================================

To develop on Ceph, you can use the *vstart.sh* utility to deploy a fake
local cluster for development purposes.

Usage
=====

It allows you to deploy a fake local cluster on your machine for development
purposes. It starts rgw, mon, osd, and/or mds daemons, or all of them if not
specified.

To start your development cluster, type the following::

   vstart.sh [OPTIONS]...

In order to stop the cluster, you can type::

   ./stop.sh

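For example, a common way to bring up a fresh cluster in debug mode, with
cephx enabled and bound to localhost (a sketch using only options documented
in the next section), is::

   vstart.sh -n -d -x -l
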
Options
=======

.. option:: -b, --bluestore

   Use bluestore as the objectstore backend for osds.

.. option:: --cache <pool>

   Set a cache-tier for the specified pool.

.. option:: -d, --debug

   Launch in debug mode.

.. option:: -e

   Create an erasure pool.

.. option:: -f, --filestore

   Use filestore as the osd objectstore backend.

.. option:: --hitset <pool> <hit_set_type>

   Enable hitset tracking.

.. option:: -i ip_address

   Bind to the specified *ip_address* instead of guessing it and resolving it
   from the hostname.

.. option:: -k

   Keep old configuration files instead of overwriting them.

.. option:: -K, --kstore

   Use kstore as the osd objectstore backend.

.. option:: -l, --localhost

   Use localhost instead of hostname.

.. option:: -m ip[:port]

   Specifies monitor *ip* address and *port*.

.. option:: --memstore

   Use memstore as the objectstore backend for osds.

.. option:: --multimds <count>

   Allow multiple active mds daemons, up to the specified maximum count.

.. option:: -n, --new

   Create a new cluster.

.. option:: -N, --not-new

   Reuse existing cluster config (default).

.. option:: --nodaemon

   Use ceph-run as a wrapper for mon/osd/mds.

.. option:: --nolockdep

   Disable lockdep.

.. option:: -o <config>

   Add *config* to all sections in the ceph configuration.

.. option:: --rgw_port <port>

   Specify the ceph rgw http listen port.

.. option:: --rgw_frontend <frontend>

   Specify the rgw frontend configuration (default is civetweb).

.. option:: --rgw_compression <compression_type>

   Specify the rgw compression plugin (default is disabled).

.. option:: --smallmds

   Configure the mds with a small cache size limit.

.. option:: --short

   Short object names only; necessary for an ext4 dev.

.. option:: --valgrind[_{osd,mds,mon}] 'valgrind_toolname [args...]'

   Launch the osd/mds/mon daemons (or all of the ceph binaries) using
   valgrind with the specified tool and arguments.

.. option:: --without-dashboard

   Do not run the mgr dashboard.

.. option:: -x

   Enable cephx (on by default).

.. option:: -X

   Disable cephx.
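
The options above can be combined in a single invocation. As a sketch (using
only flags documented above), a fresh cluster backed by bluestore, running in
debug mode and with cephx disabled, could be started with::

   vstart.sh -n -b -d -X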


Environment variables
=====================

{OSD,MDS,MON,RGW}

These environment variables contain the number of instances of the desired
Ceph process you want to start.

Example::

   OSD=3 MON=3 RGW=1 vstart.sh
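
These variables can be combined with the command-line options; for instance,
a sketch of a fresh single-mon, single-osd cluster with one rgw, started in
debug mode, might be::

   MON=1 OSD=1 RGW=1 vstart.sh -n -d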


============================================================
 Deploying multiple development clusters on the same machine
============================================================

In order to bring up multiple ceph clusters on the same machine, *mstart.sh*,
a small wrapper around the above *vstart.sh*, can help.

Usage
=====

To start multiple clusters, run mstart.sh once for each cluster you want to
deploy. It starts the monitors, rgws, etc. of each cluster on different
ports, allowing you to run multiple mons, rgws, and so on, on the same
machine. Invoke it in the following way::

   mstart.sh <cluster-name> <vstart options>

For example::

   ./mstart.sh cluster1 -n
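
Each cluster gets its own name; a second cluster can be brought up alongside
the first in the same way::

   ./mstart.sh cluster2 -n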


To stop a cluster, run::

   ./mstop.sh <cluster-name>
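
Continuing the example above, both clusters would be stopped with::

   ./mstop.sh cluster1
   ./mstop.sh cluster2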