=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a third Ceph OSD Daemon, a Metadata Server, and two more Ceph Monitors.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

  mkdir my-cluster
  cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue the
   ``sudo`` commands needed on the remote host.

.. topic:: Disable ``requiretty``

   On some distributions (e.g., CentOS), you may receive an error while trying
   to execute ``ceph-deploy`` commands. If ``requiretty`` is set by default,
   disable it by executing ``sudo visudo`` and locating the
   ``Defaults requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` to
   ensure that ``ceph-deploy`` can connect using the ``ceph`` user and execute
   commands with ``sudo``.
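
   For reference, a minimal sketch of the two ``sudoers`` entries involved,
   assuming the deploy user is named ``ceph`` (adjust the user name if yours
   differs)::

     # passwordless sudo for the deploy user (set up in the Preflight Checklist)
     ceph ALL = (root) NOPASSWD:ALL
     # the change described above: do not require a tty for the ceph user
     Defaults:ceph !requiretty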

Create a Cluster
================

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages and erase all of their data and
configuration::

  ceph-deploy purge {ceph-node} [{ceph-node}]
  ceph-deploy purgedata {ceph-node} [{ceph-node}]
  ceph-deploy forgetkeys

If you execute ``purge``, you must re-install Ceph.
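
For example, assuming the node names used throughout this guide, a full reset
might look like this (substitute your own hostnames)::

  ceph-deploy purge admin-node node1 node2 node3
  ceph-deploy purgedata admin-node node1 node2 node3
  ceph-deploy forgetkeys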

On your admin node, from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

     ceph-deploy new {initial-monitor-node(s)}

   Specify the node(s) as hostname, fqdn, or hostname:fqdn. For example::

     ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
   directory. You should see a Ceph configuration file, a monitor secret
   keyring, and a log file for the new cluster. See `ceph-deploy new -h`_
   for additional details.
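
   With the default cluster name (``ceph``), the listing typically looks
   something like the following; exact file names can vary with your
   ``ceph-deploy`` version::

     ls
     ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring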

#. Change the default number of replicas in the Ceph configuration file from
   ``3`` to ``2`` so that Ceph can achieve an ``active + clean`` state with
   just two Ceph OSDs. Add the following line under the ``[global]`` section::

     osd pool default size = 2

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

     public network = {ip-address}/{netmask}
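
   Taken together, the edited ``[global]`` section might look something like
   the sketch below. The ``fsid``, monitor, and network values shown here are
   placeholders; keep whatever ``ceph-deploy new`` generated for your cluster
   and only append the two settings discussed above.

   .. code-block:: ini

      [global]
      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
      mon initial members = node1
      mon host = 192.168.0.1
      # added for this quick start:
      osd pool default size = 2
      public network = 192.168.0.0/24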

#. Install Ceph. ::

     ceph-deploy install {ceph-node} [{ceph-node} ...]

   For example::

     ceph-deploy install admin-node node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

   .. note:: If you use ``ceph-deploy purge``, you must re-execute this step
      to re-install Ceph.

#. Add the initial monitor(s) and gather the keys::

     ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``{cluster-name}.client.admin.keyring``
   - ``{cluster-name}.bootstrap-osd.keyring``
   - ``{cluster-name}.bootstrap-mds.keyring``
   - ``{cluster-name}.bootstrap-rgw.keyring``

   .. note:: The bootstrap-rgw keyring is only created during installation of
      clusters running Hammer or newer.

   .. note:: If this process fails with a message similar to "Unable to find
      /etc/ceph/ceph.client.admin.keyring", please ensure that the IP listed
      for the monitor node in ceph.conf is the Public IP, not the Private IP.
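
   If the keyrings were not gathered automatically (for example, because a
   monitor was slow to form quorum), you can usually retry the collection from
   the admin node; ``node1`` below is this guide's example monitor host::

     ceph-deploy gatherkeys node1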

#. Add two OSDs. For fast setup, this quick start uses a directory rather
   than an entire disk per Ceph OSD Daemon. See `ceph-deploy osd`_ for
   details on using separate disks/partitions for OSDs and journals.
   Log in to the Ceph Nodes and create a directory for
   each Ceph OSD Daemon. ::

     ssh node2
     sudo mkdir /var/local/osd0
     exit

     ssh node3
     sudo mkdir /var/local/osd1
     exit

   Then, from your admin node, use ``ceph-deploy`` to prepare the OSDs. ::

     ceph-deploy osd prepare {ceph-node}:/path/to/directory

   For example::

     ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

   Finally, activate the OSDs. ::

     ceph-deploy osd activate {ceph-node}:/path/to/directory

   For example::

     ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

     ceph-deploy admin {admin-node} {ceph-node}

   For example::

     ceph-deploy admin admin-node node1 node2 node3

   When ``ceph-deploy`` is talking to the local admin host (``admin-node``),
   it must be reachable by its hostname. If necessary, modify ``/etc/hosts``
   to add the name of the admin host.

#. Ensure that you have the correct permissions for the
   ``ceph.client.admin.keyring``. ::

     sudo chmod +r /etc/ceph/ceph.client.admin.keyring

#. Check your cluster's health. ::

     ceph health

   Your cluster should return an ``active + clean`` state when it
   has finished peering.
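
   A healthy cluster typically reports something like the following (a sketch;
   your exact output may differ)::

     ceph health
     HEALTH_OK

   For a more detailed view, ``ceph -s`` (or ``ceph status``) shows the monitor
   quorum, the OSD count, and the placement group states.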


Operating Your Cluster
======================

Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
To operate the cluster daemons with Debian/Ubuntu distributions, see
`Running Ceph with Upstart`_. To operate the cluster daemons with CentOS,
Red Hat, Fedora, and SLES distributions, see `Running Ceph with sysvinit`_.
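
As a quick reference, the start/stop commands differ by init system; the
following is a sketch, and the exact service names depend on your distribution
and release::

  # Debian/Ubuntu (Upstart)
  sudo start ceph-all
  sudo stop ceph-all

  # CentOS/RHEL/Fedora/SLES (sysvinit)
  sudo /etc/init.d/ceph start
  sudo /etc/init.d/ceph stop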

To learn more about peering and cluster health, see `Monitoring a Cluster`_.
To learn more about Ceph OSD Daemon and placement group health, see
`Monitoring OSDs and PGs`_. To learn more about managing users, see
`User Management`_.

Once you deploy a Ceph cluster, you can try out some of the administration
functionality, the ``rados`` object store command line, and then proceed to
the Quick Start guides for the Ceph Block Device, the Ceph Filesystem, and the
Ceph Object Gateway.


Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to ``node1``.
Then add a Ceph Monitor to ``node2`` and ``node3`` to establish a
quorum of Ceph Monitors.

.. ditaa::
           /------------------\         /----------------\
           |    ceph-deploy   |         |      node1     |
           |    Admin Node    |         | cCCC           |
           |                  +-------->+   mon.node1    |
           |                  |         |     osd.2      |
           |                  |         |   mds.node1    |
           \---------+--------/         \----------------/
                     |
                     |                  /----------------\
                     |                  |      node2     |
                     |                  | cCCC           |
                     +----------------->+                |
                     |                  |     osd.0      |
                     |                  |   mon.node2    |
                     |                  \----------------/
                     |
                     |                  /----------------\
                     |                  |      node3     |
                     |                  | cCCC           |
                     +----------------->+                |
                                        |     osd.1      |
                                        |   mon.node3    |
                                        \----------------/

Adding an OSD
-------------

Since you are running a 3-node cluster for demonstration purposes, add the OSD
to the monitor node. ::

  ssh node1
  sudo mkdir /var/local/osd2
  exit

Then, from your ``ceph-deploy`` node, prepare the OSD. ::

  ceph-deploy osd prepare {ceph-node}:/path/to/directory

For example::

  ceph-deploy osd prepare node1:/var/local/osd2

Finally, activate the OSD. ::

  ceph-deploy osd activate {ceph-node}:/path/to/directory

For example::

  ceph-deploy osd activate node1:/var/local/osd2

Once you have added your new OSD, Ceph will begin rebalancing the cluster by
migrating placement groups to your new OSD. You can observe this process with
the ``ceph`` CLI. ::

  ceph -w

You should see the placement group states change from ``active+clean`` to
``active`` with some degraded objects, and finally back to ``active+clean``
when migration completes. (Press Control-C to exit.)
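
Once rebalancing settles, you can confirm that the new OSD has joined the
cluster with ``ceph osd tree``, which lists each host and its OSDs along with
their ``up``/``down`` status::

  ceph osd tree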


Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

  ceph-deploy mds create {ceph-node}

For example::

  ceph-deploy mds create node1

.. note:: Currently Ceph runs in production with one metadata server only. You
   may use more, but there is currently no commercial support for a cluster
   with multiple metadata servers.
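
You can verify that the daemon registered with the cluster by querying the MDS
map from a node that holds the admin keyring::

  ceph mds stat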


Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

  ceph-deploy rgw create {gateway-node}

For example::

  ceph-deploy rgw create node1

.. note:: This functionality is new with the **Hammer** release and requires
   ``ceph-deploy`` v1.5.23 or newer.

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=[::]:80
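
A quick way to confirm that the gateway is answering is to request the default
endpoint from any host that can reach it; an anonymous request normally returns
a short XML ``ListAllMyBucketsResult`` document. The example below assumes the
gateway runs on ``node1`` with the default port::

  curl http://node1:7480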


Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high
availability, Ceph Storage Clusters typically run multiple Ceph
Monitors so that the failure of a single Ceph Monitor will not bring down the
Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority
of monitors (i.e., 1; 2 out of 3; 3 out of 4; 3 out of 5; 4 out of 6; etc.) to
form a quorum.

Add two Ceph Monitors to your cluster. ::

  ceph-deploy mon add {ceph-node}

For example::

  ceph-deploy mon add node2
  ceph-deploy mon add node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

  ceph quorum_status --format json-pretty

.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.
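
For example, on CentOS/RHEL monitor hosts, installing and enabling NTP might
look like the sketch below; package and service names vary by distribution and
release::

  sudo yum install -y ntp ntpdate
  sudo chkconfig ntpd on
  sudo service ntpd start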


Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

  ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

     echo {Test-data} > testfile.txt
     rados mkpool data
     rados put {object-name} {file-path} --pool=data
     rados put test-object-1 testfile.txt --pool=data

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

     rados -p data ls

   Now, identify the object location::

     ceph osd map {pool-name} {object-name}
     ceph osd map data test-object-1

   Ceph should output the object's location. For example::

     osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

     rados rm test-object-1 --pool=data

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
the migration manually.
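
When you are done experimenting, you can also remove the test pool created
above. The command below assumes the ``data`` pool from this exercise and
deletes it together with every object it contains, so use it with care::

  rados rmpool data data --yes-i-really-really-mean-it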


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management