=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a fourth Ceph OSD Daemon and two more Ceph Monitors.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

    mkdir my-cluster
    cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue ``sudo``
   commands needed on the remote host.


Starting over
=============

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages, and erase all its data and configuration::

    ceph-deploy purge {ceph-node} [{ceph-node}]
    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys
    rm ceph.*

If you execute ``purge``, you must re-install Ceph. The last ``rm``
command removes any files that ``ceph-deploy`` wrote out locally
during a previous installation.


Create a Cluster
================

On your admin node from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

     ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as a hostname, FQDN, or hostname:fqdn. For example::

     ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
   current directory. You should see a Ceph configuration file
   (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
   and a log file for the new cluster. See `ceph-deploy new -h`_ for
   additional details.

   Note for users of Ubuntu 18.04: Python 2 is a prerequisite of Ceph.
   Install the ``python-minimal`` package on Ubuntu 18.04 to provide
   Python 2::

     [Ubuntu 18.04] $ sudo apt install python-minimal

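   For instance, listing the working directory at this point should show
   something similar to the following (the log file name is illustrative;
   ``ceph-deploy`` normally writes a ``ceph-deploy-ceph.log`` file here)::

     $ ls
     ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
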
#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

     public network = {ip-address}/{bits}

   For example::

     public network = 10.1.2.0/24

   to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.

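   Assuming the example subnet above, the relevant portion of the
   ``ceph.conf`` generated by ``ceph-deploy new`` might then look like the
   following sketch (the ``fsid`` and monitor address are placeholders)::

     [global]
     fsid = {generated-uuid}
     mon initial members = node1
     mon host = {node1-public-ip}
     public network = 10.1.2.0/24
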
#. If you are deploying in an IPv6 environment, add the following to
   ``ceph.conf`` in the local directory::

     echo ms bind ipv6 = true >> ceph.conf

#. Install Ceph packages::

     ceph-deploy install {ceph-node} [...]

   For example::

     ceph-deploy install node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

#. Deploy the initial monitor(s) and gather the keys::

     ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``ceph.client.admin.keyring``
   - ``ceph.bootstrap-mgr.keyring``
   - ``ceph.bootstrap-osd.keyring``
   - ``ceph.bootstrap-mds.keyring``
   - ``ceph.bootstrap-rgw.keyring``
   - ``ceph.bootstrap-rbd.keyring``
   - ``ceph.bootstrap-rbd-mirror.keyring``

   .. note:: If this process fails with a message similar to "Unable to
      find /etc/ceph/ceph.client.admin.keyring", please ensure that the
      IP listed for the monitor node in ``ceph.conf`` is the Public IP, not
      the Private IP.

117
118 #. Use ``ceph-deploy`` to copy the configuration file and admin key to
119 your admin node and your Ceph Nodes so that you can use the ``ceph``
120 CLI without having to specify the monitor address and
121 ``ceph.client.admin.keyring`` each time you execute a command. ::
122
123 ceph-deploy admin {ceph-node(s)}
124
125 For example::
126
127 ceph-deploy admin node1 node2 node3
128
#. Deploy a manager daemon (required only for luminous+ builds, i.e. >= 12.x)::

     ceph-deploy mgr create node1

#. Add three OSDs. For the purposes of these instructions, we assume you have an
   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not
   currently in use and does not contain any important data.* ::

     ceph-deploy osd create --data {device} {ceph-node}

   For example::

     ceph-deploy osd create --data /dev/vdb node1
     ceph-deploy osd create --data /dev/vdb node2
     ceph-deploy osd create --data /dev/vdb node3

   .. note:: If you are creating an OSD on an LVM volume, the argument to
      ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
      the volume's block device.

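   For example, with data on a logical volume (the volume group and logical
   volume names below are placeholders), the call references them rather
   than a device path::

     ceph-deploy osd create --data {vg_name}/{lv_name} node1
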
#. Check your cluster's health. ::

     ssh node1 sudo ceph health

   Your cluster should report ``HEALTH_OK``. You can view a more complete
   cluster status with::

     ssh node1 sudo ceph -s

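   On a healthy cluster, the health check simply returns ``HEALTH_OK``. For
   example (output shown is illustrative)::

     $ ssh node1 sudo ceph health
     HEALTH_OK
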

Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand
the cluster. Add a Ceph Monitor and Ceph Manager to ``node2`` and ``node3``
to improve reliability and availability.

.. ditaa::

      /------------------\         /----------------\
      |    ceph-deploy   |         |     node1      |
      |    Admin Node    |         | cCCC           |
      |                  +-------->+                |
      |                  |         |   mon.node1    |
      |                  |         |     osd.0      |
      |                  |         |   mgr.node1    |
      \---------+--------/         \----------------/
                |
                |                  /----------------\
                |                  |     node2      |
                |                  | cCCC           |
                +----------------->+                |
                |                  |     osd.1      |
                |                  |   mon.node2    |
                |                  \----------------/
                |
                |                  /----------------\
                |                  |     node3      |
                |                  | cCCC           |
                +----------------->+                |
                                   |     osd.2      |
                                   |   mon.node3    |
                                   \----------------/

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2* where *N* is the number of monitors) to form a quorum. For
example, three monitors can still form a quorum of two if one monitor
fails. Odd numbers of monitors tend to be better, although this is not
required.

.. tip:: If you did not define the ``public network`` option above then
   the new monitor will not know which IP address to bind to on the
   new hosts. You can add this line to your ``ceph.conf`` by editing
   it now and then push it out to each node with
   ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.

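For example, using the node names from this guide, you might push the updated
configuration out with::

    ceph-deploy --overwrite-conf config push node1 node2 node3
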
Add two Ceph Monitors to your cluster::

    ceph-deploy mon add {ceph-nodes}

For example::

    ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

    ceph quorum_status --format json-pretty


.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.

Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.

To deploy additional manager daemons::

    ceph-deploy mgr create node2 node3

You should see the standby managers in the output from::

    ssh node1 sudo ceph -s

Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

    ceph-deploy rgw create {gateway-node}

For example::

    ceph-deploy rgw create node1

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=[::]:80

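Assuming the default port and the example gateway node above, a quick way to
check that the gateway is responding is a plain HTTP request (hostname as used
throughout this guide)::

    curl http://node1:7480
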


Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

    ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data and a pool name using the
   ``rados put`` command on the command line. For example::

      echo {Test-data} > testfile.txt
      ceph osd pool create mytest
      rados put {object-name} {file-path} --pool=mytest
      rados put test-object-1 testfile.txt --pool=mytest

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

      rados -p mytest ls

   Now, identify the object location::

      ceph osd map {pool-name} {object-name}
      ceph osd map mytest test-object-1

   Ceph should output the object's location. For example::

      osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command.

   For example::

      rados rm test-object-1 --pool=mytest

   To delete the ``mytest`` pool::

      ceph osd pool rm mytest

   (For safety reasons you will need to supply additional arguments as
   prompted; deleting pools destroys data.)
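
   For example (in recent Ceph releases, deleting a pool requires repeating
   the pool name, passing a confirmation flag, and having
   ``mon allow pool delete = true`` set on the monitors)::

      ceph osd pool rm mytest mytest --yes-i-really-really-mean-it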

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
data migration or balancing manually.


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management