=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding two more Ceph Monitors and Ceph Managers.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

   mkdir my-cluster
   cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue the
   ``sudo`` commands needed on the remote host.


Starting over
=============

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages and erase all Ceph data and configuration::

   ceph-deploy purge {ceph-node} [{ceph-node}]
   ceph-deploy purgedata {ceph-node} [{ceph-node}]
   ceph-deploy forgetkeys
   rm ceph.*

If you execute ``purge``, you must re-install Ceph. The last ``rm``
command removes any files that were written out by ``ceph-deploy`` locally
during a previous installation.


Create a Cluster
================

On your admin node, from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

      ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as hostname, fqdn or hostname:fqdn. For example::

      ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
   current directory. You should see a Ceph configuration file
   (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
   and a log file for the new cluster. See `ceph-deploy new -h`_ for
   additional details.

   Note for users of Ubuntu 18.04: Python 2 is a prerequisite of Ceph.
   Install the ``python-minimal`` package on Ubuntu 18.04 to provide
   Python 2::

      [Ubuntu 18.04] $ sudo apt install python-minimal

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

      public network = {ip-address}/{bits}

   For example::

      public network = 10.1.2.0/24

   to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network. A
   sample ``ceph.conf`` with this setting in place is shown after this
   procedure.

#. If you are deploying in an IPv6 environment, add the following to
   ``ceph.conf`` in the local directory::

      echo ms bind ipv6 = true >> ceph.conf

#. Install Ceph packages::

      ceph-deploy install {ceph-node} [...]

   For example::

      ceph-deploy install node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

#. Deploy the initial monitor(s) and gather the keys::

      ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``ceph.client.admin.keyring``
   - ``ceph.bootstrap-mgr.keyring``
   - ``ceph.bootstrap-osd.keyring``
   - ``ceph.bootstrap-mds.keyring``
   - ``ceph.bootstrap-rgw.keyring``
   - ``ceph.bootstrap-rbd.keyring``
   - ``ceph.bootstrap-rbd-mirror.keyring``

   .. note:: If this process fails with a message similar to "Unable to
      find /etc/ceph/ceph.client.admin.keyring", please ensure that the
      IP listed for the monitor node in ``ceph.conf`` is the public IP,
      not the private IP.

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

      ceph-deploy admin {ceph-node(s)}

   For example::

      ceph-deploy admin node1 node2 node3

#. Deploy a manager daemon (required only for luminous+ builds, i.e. >= 12.x)::

      ceph-deploy mgr create node1

#. Add three OSDs. For the purposes of these instructions, we assume you have an
   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not
   currently in use and does not contain any important data.* ::

      ceph-deploy osd create --data {device} {ceph-node}

   For example::

      ceph-deploy osd create --data /dev/vdb node1
      ceph-deploy osd create --data /dev/vdb node2
      ceph-deploy osd create --data /dev/vdb node3

   .. note:: If you are creating an OSD on an LVM volume, the argument to
      ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
      the volume's block device.

#. Check your cluster's health. ::

      ssh node1 sudo ceph health

   Your cluster should report ``HEALTH_OK``. You can view a more complete
   cluster status with::

      ssh node1 sudo ceph -s

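At this point, the ``ceph.conf`` in your working directory might look similar
to the following. This is only an illustrative sketch: the ``fsid`` and the
addresses below are placeholders, and the exact set of options written out by
``ceph-deploy new`` (typically including authentication settings) varies
between releases.

.. code-block:: ini

   [global]
   fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
   mon initial members = node1
   mon host = 10.1.2.11
   public network = 10.1.2.0/24
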
Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph Monitor and Ceph Manager to ``node2`` and ``node3``
to improve reliability and availability.

.. ditaa::
   /------------------\         /----------------\
   |    ceph-deploy   |         |      node1     |
   |    Admin Node    |         | cCCC           |
   |                  +-------->+                |
   |                  |         |    mon.node1   |
   |                  |         |    osd.0       |
   |                  |         |    mgr.node1   |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |      node2     |
             |                  | cCCC           |
             +----------------->+                |
             |                  |    osd.1       |
             |                  |    mon.node2   |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |      node3     |
             |                  | cCCC           |
             +----------------->+                |
                                |    osd.2       |
                                |    mon.node3   |
                                \----------------/

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2* where *N* is the number of monitors) to form a quorum.
For example, a cluster with three monitors keeps its quorum after losing
one monitor, and a cluster with five monitors tolerates the loss of two.
Odd numbers of monitors tend to be better, although this is not required.

.. tip:: If you did not define the ``public network`` option above then
   the new monitor will not know which IP address to bind to on the
   new hosts. You can add this line to your ``ceph.conf`` by editing
   it now and then push it out to each node with
   ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.

Add two Ceph Monitors to your cluster::

   ceph-deploy mon add {ceph-nodes}

For example::

   ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

   ceph quorum_status --format json-pretty

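The output is a JSON document describing the monitor map and the current
quorum. The snippet below is abridged and purely illustrative (field names and
layout vary by Ceph release), but all of your monitors should appear in
``quorum_names``::

   {
       "quorum_names": ["node1", "node2", "node3"],
       "quorum_leader_name": "node1",
       "monmap": {
           "mons": [ ... ]
       }
   }
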
.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.

Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.

To deploy additional manager daemons::

   ceph-deploy mgr create node2 node3

You should see the standby managers in the output from::

   ssh node1 sudo ceph -s

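Within the ``services`` section of the status output, the ``mgr`` line should
now show one active manager and two standbys. The exact wording differs
between Ceph releases, but it looks roughly like this::

   mgr: node1(active), standbys: node2, node3
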
Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

   ceph-deploy rgw create {gateway-node}

For example::

   ceph-deploy rgw create node1

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=[::]:80

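To confirm that the gateway is answering requests, you can send an
unauthenticated request to it from the admin node. This is only a quick sanity
check (it assumes the default port and that ``node1`` resolves from the admin
node)::

   curl http://node1:7480

An anonymous request like this should return a short XML
``ListAllMyBucketsResult`` document.
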
Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

   ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

      echo {Test-data} > testfile.txt
      ceph osd pool create mytest
      rados put {object-name} {file-path} --pool=mytest
      rados put test-object-1 testfile.txt --pool=mytest

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

      rados -p mytest ls

   Now, identify the object location::

      ceph osd map {pool-name} {object-name}
      ceph osd map mytest test-object-1

   Ceph should output the object's location. For example::

      osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

      rados rm test-object-1 --pool=mytest

   To delete the ``mytest`` pool::

      ceph osd pool rm mytest

   (For safety reasons you will need to supply additional arguments as
   prompted; deleting pools destroys data.)

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
data migration or balancing manually.


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management