=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a Metadata Server, two more Ceph Monitors and Ceph Managers, and an RGW
instance.

For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

    mkdir my-cluster
    cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue ``sudo``
   commands needed on the remote host.


Starting over
=============

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages and erase all of their data and
configuration::

    ceph-deploy purge {ceph-node} [{ceph-node}]
    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys
    rm ceph.*

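For example, with the three demo nodes used throughout this guide (this is
simply the placeholders above filled in with those node names)::

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
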
If you execute ``purge``, you must re-install Ceph. The last ``rm``
command removes any files that were written out by ``ceph-deploy`` locally
during a previous installation.


Create a Cluster
================

On your admin node, from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

      ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as hostname, fqdn or hostname:fqdn. For example::

      ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
   current directory. You should see a Ceph configuration file
   (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
   and a log file for the new cluster. See `ceph-deploy new -h`_ for
   additional details.

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

      public network = {ip-address}/{bits}

   For example, to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0)
   network::

      public network = 10.1.2.0/24

   A rough sketch of what the edited file might look like appears after these
   steps.

#. If you are deploying in an IPv6 environment, run the following to add the
   ``ms bind ipv6`` setting to ``ceph.conf`` in the local directory::

      echo ms bind ipv6 = true >> ceph.conf

#. Install Ceph packages. ::

      ceph-deploy install {ceph-node} [...]

   For example::

      ceph-deploy install node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

#. Deploy the initial monitor(s) and gather the keys::

      ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``ceph.client.admin.keyring``
   - ``ceph.bootstrap-mgr.keyring``
   - ``ceph.bootstrap-osd.keyring``
   - ``ceph.bootstrap-mds.keyring``
   - ``ceph.bootstrap-rgw.keyring``
   - ``ceph.bootstrap-rbd.keyring``

   .. note:: If this process fails with a message similar to "Unable to
      find /etc/ceph/ceph.client.admin.keyring", please ensure that the
      IP listed for the monitor node in ``ceph.conf`` is the Public IP, not
      the Private IP.

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

      ceph-deploy admin {ceph-node(s)}

   For example::

      ceph-deploy admin node1 node2 node3

#. Deploy a manager daemon (required only for luminous+ builds, i.e. >= 12.x)::

      ceph-deploy mgr create node1

#. Add three OSDs. For the purposes of these instructions, we assume you have an
   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not
   currently in use and does not contain any important data.* ::

      ceph-deploy osd create {ceph-node}:{device}

   For example::

      ceph-deploy osd create node1:vdb node2:vdb node3:vdb

#. Check your cluster's health. ::

      ssh node1 sudo ceph health

   Your cluster should report ``HEALTH_OK``. You can view a more complete
   cluster status with::

      ssh node1 sudo ceph -s
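
If you want a closer look at what you just deployed, a couple of additional
status commands can help. These are standard Ceph CLI commands rather than part
of the original steps, run here from the admin node over SSH as above::

    ssh node1 sudo ceph osd tree
    ssh node1 sudo ceph df

``ceph osd tree`` should list the three OSDs you created, one per node, and
``ceph df`` shows the cluster's overall capacity and usage.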
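
Returning to the ``public network`` step above, here is a rough sketch of what
the edited configuration file might look like. The first three settings stand in
for whatever ``ceph-deploy new`` generated on your system (the exact keys and
values will differ); only the last line is the one you add:

.. code-block:: ini

    [global]
    fsid = {generated-fsid}
    mon_initial_members = node1
    mon_host = {node1-ip-address}
    public network = 10.1.2.0/24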

Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to
expand the cluster. Add a Ceph Metadata Server to ``node1``. Then add a
Ceph Monitor and Ceph Manager to ``node2`` and ``node3`` to improve reliability
and availability.

.. ditaa::
   /------------------\         /----------------\
   |    ceph-deploy   |         |     node1      |
   |    Admin Node    |         | cCCC           |
   |                  +-------->+   mon.node1    |
   |                  |         |   osd.0        |
   |                  |         |   mgr.node1    |
   |                  |         |   mds.node1    |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |     node2      |
             |                  | cCCC           |
             +----------------->+                |
             |                  |   osd.1        |
             |                  |   mon.node2    |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |     node3      |
             |                  | cCCC           |
             +----------------->+                |
                                |   osd.2        |
                                |   mon.node3    |
                                \----------------/

Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

    ceph-deploy mds create {ceph-node}

For example::

    ceph-deploy mds create node1

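If you would like to confirm that the daemon is up, one quick check (not part of
the original steps, run from the admin node as in the sections above) is::

    ssh node1 sudo ceph mds stat

Until a CephFS filesystem is created, the new MDS should simply show up as a
standby daemon.
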
Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2*, where *N* is the number of monitors) to form a quorum; for
example, two of three monitors, or three of five, must be available.
Odd numbers of monitors tend to be better, although this is not required.

.. tip:: If you did not define the ``public network`` option above, then
   the new monitor will not know which IP address to bind to on the
   new hosts. You can add this line to your ``ceph.conf`` by editing
   it now and then pushing it out to each node with
   ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.

Add two Ceph Monitors to your cluster::

    ceph-deploy mon add {ceph-nodes}

For example::

    ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

    ceph quorum_status --format json-pretty
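
For a shorter, one-line summary of the monitor membership (an optional check,
not part of the original instructions), you can also run::

    ceph mon stat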


.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.

Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.

To deploy additional manager daemons::

    ceph-deploy mgr create node2 node3

You should see the standby managers in the output from::

    ssh node1 sudo ceph -s


Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

    ceph-deploy rgw create {gateway-node}

For example::

    ceph-deploy rgw create node1

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows
(restart the RGW service afterwards for the change to take effect):

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=[::]:80

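To check that the gateway is answering, you can point any HTTP client at it. For
instance (assuming you kept the default port 7480 and have ``curl`` available on
the admin node)::

    curl http://node1:7480

An anonymous request like this should return a short XML response rather than an
error, which confirms that the RGW instance is up.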


Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map, and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

    ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

      echo {Test-data} > testfile.txt
      ceph osd pool create mytest 8
      rados put {object-name} {file-path} --pool=mytest
      rados put test-object-1 testfile.txt --pool=mytest

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

      rados -p mytest ls

   Now, identify the object location::

      ceph osd map {pool-name} {object-name}
      ceph osd map mytest test-object-1

   Ceph should output the object's location. For example::

      osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

      rados rm test-object-1 --pool=mytest

   To delete the ``mytest`` pool::

      ceph osd pool rm mytest

   (For safety reasons you will need to supply additional arguments as
   prompted; deleting pools destroys data.)
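
   As a concrete illustration of those safety arguments (the exact requirements
   can vary by release), the full form of the delete command repeats the pool
   name and adds a confirmation flag; recent releases may additionally require
   ``mon_allow_pool_delete = true`` in the monitor configuration::

      ceph osd pool rm mytest mytest --yes-i-really-really-mean-it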

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
data migration or balancing manually.


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management