=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a third Ceph OSD Daemon, a Metadata Server and two more Ceph Monitors.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

  mkdir my-cluster
  cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue ``sudo``
   commands needed on the remote host.

.. topic:: Disable ``requiretty``

   On some distributions (e.g., CentOS), you may receive an error while trying
   to execute ``ceph-deploy`` commands. If ``requiretty`` is set
   by default, disable it by executing ``sudo visudo`` and locating the
   ``Defaults requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` to
   ensure that ``ceph-deploy`` can connect using the ``ceph`` user and execute
   commands with ``sudo``.

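   For reference, the relevant line in the sudoers file changes roughly as
   follows (a minimal sketch; the rest of your sudoers file stays as it is)::

      # before
      Defaults    requiretty

      # after: exempt only the ceph deployment user
      Defaults:ceph !requiretty
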
Create a Cluster
================

If at any point you run into trouble and you want to start over, execute
the following to purge the configuration::

  ceph-deploy purgedata {ceph-node} [{ceph-node}]
  ceph-deploy forgetkeys

To purge the Ceph packages too, you may also execute::

  ceph-deploy purge {ceph-node} [{ceph-node}]

If you execute ``purge``, you must re-install Ceph.

On your admin node, from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

     ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as hostname, fqdn or hostname:fqdn. For example::

     ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
   directory. You should see a Ceph configuration file, a monitor secret
   keyring, and a log file for the new cluster. See `ceph-deploy new -h`_
   for additional details.

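   As a rough illustration, the generated ``ceph.conf`` starts with a
   ``[global]`` section along these lines (the ``fsid`` is a UUID that
   ``ceph-deploy`` generates for your cluster; the other values reflect the
   monitor node you named)::

     [global]
     fsid = {generated-uuid}
     mon_initial_members = node1
     mon_host = {node1-ip-address}
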
#. Change the default number of replicas in the Ceph configuration file from
   ``3`` to ``2`` so that Ceph can achieve an ``active + clean`` state with
   just two Ceph OSDs. Add the following line under the ``[global]`` section::

     osd pool default size = 2

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

     public network = {ip-address}/{netmask}

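   With both of the edits above in place, the end of your ``[global]`` section
   would contain, for example (the subnet below is only an illustrative value;
   substitute your own public network)::

     osd pool default size = 2
     public network = 10.1.2.0/24
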
#. Install Ceph. ::

     ceph-deploy install {ceph-node} [{ceph-node} ...]

   For example::

     ceph-deploy install admin-node node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.
   **NOTE**: If you use ``ceph-deploy purge``, you must re-execute this step
   to re-install Ceph.

#. Add the initial monitor(s) and gather the keys::

     ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``{cluster-name}.client.admin.keyring``
   - ``{cluster-name}.bootstrap-osd.keyring``
   - ``{cluster-name}.bootstrap-mds.keyring``
   - ``{cluster-name}.bootstrap-rgw.keyring``

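   Note that ``{cluster-name}`` defaults to ``ceph``, so with an unmodified
   configuration a listing of your working directory at this point would look
   roughly like the following (hypothetical output; the exact file set and the
   log file name can vary by version)::

     ceph.conf                    ceph.client.admin.keyring
     ceph.mon.keyring             ceph.bootstrap-osd.keyring
     ceph-deploy-ceph.log         ceph.bootstrap-mds.keyring
                                  ceph.bootstrap-rgw.keyring
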
   .. note:: The ``bootstrap-rgw`` keyring is only created during installation of
      clusters running Hammer or newer.

   .. note:: If this process fails with a message similar to
      "Unable to find /etc/ceph/ceph.client.admin.keyring", please ensure that the IP
      listed for the monitor node in ``ceph.conf`` is the Public IP, not the Private IP.

#. Add two OSDs. For fast setup, this quick start uses a directory rather
   than an entire disk per Ceph OSD Daemon. See `ceph-deploy osd`_ for
   details on using separate disks/partitions for OSDs and journals.
   Log in to the Ceph Nodes and create a directory for
   the Ceph OSD Daemon. ::

     ssh node2
     sudo mkdir /var/local/osd0
     exit

     ssh node3
     sudo mkdir /var/local/osd1
     exit

   Then, from your admin node, use ``ceph-deploy`` to prepare the OSDs. ::

     ceph-deploy osd prepare {ceph-node}:/path/to/directory

   For example::

     ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

   Finally, activate the OSDs. ::

     ceph-deploy osd activate {ceph-node}:/path/to/directory

   For example::

     ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

     ceph-deploy admin {admin-node} {ceph-node}

   For example::

     ceph-deploy admin admin-node node1 node2 node3

   When ``ceph-deploy`` is talking to the local admin host (``admin-node``),
   it must be reachable by its hostname. If necessary, modify ``/etc/hosts``
   to add the name of the admin host.

#. Ensure that you have the correct permissions for the
   ``ceph.client.admin.keyring``. ::

     sudo chmod +r /etc/ceph/ceph.client.admin.keyring

#. Check your cluster's health. ::

     ceph health

   Your cluster should return an ``active + clean`` state when it
   has finished peering.

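   On a healthy cluster, ``ceph health`` simply reports the overall status; for
   example (illustrative output)::

     HEALTH_OK

   You can also run ``ceph -s`` for a fuller summary that includes the monitor
   quorum, the OSD count, and the placement group states.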

Operating Your Cluster
======================

Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
To operate the cluster daemons with Debian/Ubuntu distributions, see
`Running Ceph with Upstart`_. To operate the cluster daemons with CentOS,
Red Hat, Fedora, and SLES distributions, see `Running Ceph with sysvinit`_.

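As a quick illustration (see the linked pages above for the authoritative
details), on an Upstart-based node you would typically start or stop every Ceph
daemon on that host with::

  sudo start ceph-all
  sudo stop ceph-all

while on a sysvinit-based node the usual equivalent is::

  sudo /etc/init.d/ceph start
  sudo /etc/init.d/ceph stop
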
To learn more about peering and cluster health, see `Monitoring a Cluster`_.
To learn more about Ceph OSD Daemon and placement group health, see
`Monitoring OSDs and PGs`_. To learn more about managing users, see
`User Management`_.

Once you deploy a Ceph cluster, you can try out some of the administration
functionality, the ``rados`` object store command line, and then proceed to
Quick Start guides for Ceph Block Device, Ceph Filesystem, and the Ceph Object
Gateway.


Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to ``node1``.
Then add a Ceph Monitor to ``node2`` and ``node3`` to establish a
quorum of Ceph Monitors.

.. ditaa::
   /------------------\         /----------------\
   |    ceph-deploy   |         |     node1      |
   |    Admin Node    |         | cCCC           |
   |                  +-------->+   mon.node1    |
   |                  |         |     osd.2      |
   |                  |         |   mds.node1    |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |     node2      |
             |                  | cCCC           |
             +----------------->+                |
             |                  |     osd.0      |
             |                  |   mon.node2    |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |     node3      |
             |                  | cCCC           |
             +----------------->+                |
                                |     osd.1      |
                                |   mon.node3    |
                                \----------------/

Adding an OSD
-------------

Since you are running a 3-node cluster for demonstration purposes, add the OSD
to the monitor node. ::

  ssh node1
  sudo mkdir /var/local/osd2
  exit

Then, from your ``ceph-deploy`` node, prepare the OSD. ::

  ceph-deploy osd prepare {ceph-node}:/path/to/directory

For example::

  ceph-deploy osd prepare node1:/var/local/osd2

Finally, activate the OSDs. ::

  ceph-deploy osd activate {ceph-node}:/path/to/directory

For example::

  ceph-deploy osd activate node1:/var/local/osd2

Once you have added your new OSD, Ceph will begin rebalancing the cluster by
migrating placement groups to your new OSD. You can observe this process with
the ``ceph`` CLI. ::

  ceph -w

You should see the placement group states change from ``active+clean`` to
``active`` with some degraded objects, and finally back to ``active+clean``
when migration completes. (Press Control-C to exit.)

Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

  ceph-deploy mds create {ceph-node}

For example::

  ceph-deploy mds create node1

.. note:: Currently Ceph runs in production with one metadata server only. You
   may use more, but there is currently no commercial support for a cluster
   with multiple metadata servers.

Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

  ceph-deploy rgw create {gateway-node}

For example::

  ceph-deploy rgw create node1

.. note:: This functionality is new with the **Hammer** release, and also with
   ``ceph-deploy`` v1.5.23.

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=[::]:80

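Once the gateway is running, you can check that it answers by making an
unauthenticated request against the node and port it listens on. For example,
assuming the gateway was created on ``node1`` and the default port is
unchanged::

  curl http://node1:7480

An anonymous request like this should return a small XML
``ListAllMyBucketsResult`` document, which is enough to confirm that the
gateway is up.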

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high
availability, Ceph Storage Clusters typically run multiple Ceph
Monitors so that the failure of a single Ceph Monitor will not bring down the
Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority
of monitors (i.e., 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, etc.) to form a
quorum.

Add two Ceph Monitors to your cluster. ::

  ceph-deploy mon add {ceph-node}

For example::

  ceph-deploy mon add node2
  ceph-deploy mon add node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

  ceph quorum_status --format json-pretty

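An abbreviated sketch of the kind of output to expect once all three monitors
have joined (the exact fields and values depend on your cluster)::

  {
      "quorum": [ 0, 1, 2 ],
      "quorum_names": [ "node1", "node2", "node3" ],
      "quorum_leader_name": "node1",
      ...
  }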

.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.

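   Installing NTP is typically a one-line package install; for example (the
   package name may differ by distribution)::

      sudo apt-get install ntp     # Debian/Ubuntu
      sudo yum install ntp         # CentOS/RHEL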

Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

  ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

     echo {Test-data} > testfile.txt
     rados mkpool data
     rados put {object-name} {file-path} --pool=data
     rados put test-object-1 testfile.txt --pool=data

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

     rados -p data ls

   Now, identify the object location::

     ceph osd map {pool-name} {object-name}
     ceph osd map data test-object-1

   Ceph should output the object's location. For example::

     osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

     rados rm test-object-1 --pool=data

   As the cluster evolves, the object location may change dynamically. One benefit
   of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
   the migration manually.


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management