=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a fourth Ceph OSD Daemon, a Metadata Server and two more Ceph Monitors.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

    mkdir my-cluster
    cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue the
   ``sudo`` commands needed on the remote host.

Starting over
=============

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages and erase all Ceph data and
configuration::

    ceph-deploy purge {ceph-node} [{ceph-node}]
    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys
    rm ceph.*

If you execute ``purge``, you must re-install Ceph. The last ``rm``
command removes any files that were written out by ``ceph-deploy`` locally
during a previous installation.
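
For example, with the three demo nodes used throughout this guide (substitute
your own hostnames)::

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
    ceph-deploy forgetkeys
    rm ceph.*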


Create a Cluster
================

On your admin node, from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

     ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as a hostname, FQDN, or hostname:FQDN. For example::

     ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
   current directory. You should see a Ceph configuration file
   (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
   and a log file for the new cluster. See `ceph-deploy new -h`_ for
   additional details.
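
   For example, ``ls`` should show something like the following (the log file
   name is typical, but may differ)::

     ceph.conf  ceph.mon.keyring  ceph-deploy-ceph.log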

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

     public network = {ip-address}/{bits}

   For example::

     public network = 10.1.2.0/24

   This uses addresses in the 10.1.2.0/24 (that is, 10.1.2.0/255.255.255.0)
   network.
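
   After this step the ``[global]`` section of ``ceph.conf`` might look roughly
   like the following sketch; the ``fsid`` and monitor entries are generated by
   ``ceph-deploy new`` and the values below are placeholders::

     [global]
     fsid = {generated-fsid}
     mon_initial_members = node1
     mon_host = {node1-ip-address}
     public network = 10.1.2.0/24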

#. If you are deploying in an IPv6 environment, run the following to add
   ``ms bind ipv6 = true`` to ``ceph.conf`` in the local directory::

     echo ms bind ipv6 = true >> ceph.conf

#. Install Ceph packages::

     ceph-deploy install {ceph-node} [...]

   For example::

     ceph-deploy install node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

#. Deploy the initial monitor(s) and gather the keys::

     ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``ceph.client.admin.keyring``
   - ``ceph.bootstrap-mgr.keyring``
   - ``ceph.bootstrap-osd.keyring``
   - ``ceph.bootstrap-mds.keyring``
   - ``ceph.bootstrap-rgw.keyring``
   - ``ceph.bootstrap-rbd.keyring``
   - ``ceph.bootstrap-rbd-mirror.keyring``

   .. note:: If this process fails with a message similar to "Unable to
      find /etc/ceph/ceph.client.admin.keyring", please ensure that the
      IP listed for the monitor node in ``ceph.conf`` is the public IP,
      not the private IP.
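
   To double-check which monitor address was recorded (an optional sanity
   check; the ``mon_host`` entry is written by ``ceph-deploy new``), inspect
   the generated file and compare it with the monitor host's public
   interface::

     cat ceph.conf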

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

     ceph-deploy admin {ceph-node(s)}

   For example::

     ceph-deploy admin node1 node2 node3
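
   Depending on your release and distribution, the keyring that ``ceph-deploy
   admin`` copies to ``/etc/ceph/`` may be readable only by root. If ``ceph``
   commands on a node fail with a permissions error, one common remedy is to
   run the following on that node::

     sudo chmod +r /etc/ceph/ceph.client.admin.keyring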

#. Deploy a manager daemon (required only for luminous+ builds, i.e., >= 12.x)::

     ceph-deploy mgr create node1

#. Add three OSDs. For the purposes of these instructions, we assume you have an
   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not
   currently in use and does not contain any important data.* ::

     ceph-deploy osd create --data {device} {ceph-node}

   For example::

     ceph-deploy osd create --data /dev/vdb node1
     ceph-deploy osd create --data /dev/vdb node2
     ceph-deploy osd create --data /dev/vdb node3

   .. note:: If you are creating an OSD on an LVM volume, the argument to
      ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
      the volume's block device.
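
   For example, with a hypothetical volume group ``ceph-vg`` containing a
   logical volume ``osd-lv`` (the names are illustrative only)::

     ceph-deploy osd create --data ceph-vg/osd-lv node1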

#. Check your cluster's health. ::

     ssh node1 sudo ceph health

   Your cluster should report ``HEALTH_OK``. You can view a more complete
   cluster status with::

     ssh node1 sudo ceph -s


Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph Metadata Server to ``node1``. Then add a Ceph Monitor and
Ceph Manager to ``node2`` and ``node3`` to improve reliability and availability.

.. ditaa::

    /------------------\         /----------------\
    |    ceph-deploy   |         |     node1      |
    |    Admin Node    |         | cCCC           |
    |                  +-------->+   mon.node1    |
    |                  |         |   osd.0        |
    |                  |         |   mgr.node1    |
    |                  |         |   mds.node1    |
    \---------+--------/         \----------------/
              |
              |                  /----------------\
              |                  |      node2     |
              |                  | cCCC           |
              +----------------->+                |
              |                  |   osd.1        |
              |                  |   mon.node2    |
              |                  \----------------/
              |
              |                  /----------------\
              |                  |      node3     |
              |                  | cCCC           |
              +----------------->+                |
                                 |   osd.2        |
                                 |   mon.node3    |
                                 \----------------/

Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

    ceph-deploy mds create {ceph-node}

For example::

    ceph-deploy mds create node1
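
The new MDS will remain in standby until a file system exists for it to serve.
A minimal sketch of creating one (the pool names and placement-group count are
illustrative; see the CephFS documentation for sizing guidance)::

    ceph osd pool create cephfs_data 8
    ceph osd pool create cephfs_metadata 8
    ceph fs new cephfs cephfs_metadata cephfs_data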

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2*, where *N* is the number of monitors) to form a quorum.
Odd numbers of monitors tend to be better, although this is not required.

.. tip:: If you did not define the ``public network`` option above, the
   new monitors will not know which IP address to bind to on the new
   hosts. You can add this line to your ``ceph.conf`` by editing it now
   and then pushing it out to each node with
   ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.

Add two Ceph Monitors to your cluster::

    ceph-deploy mon add {ceph-nodes}

For example::

    ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

    ceph quorum_status --format json-pretty
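
For a shorter, human-readable summary of the monitor map and quorum, you can
also run (following the same pattern as the health checks above)::

    ssh node1 sudo ceph mon stat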

.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the monitors are NTP
   peers.
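
For example, on a Debian- or Ubuntu-based monitor host you might install and
enable ``chrony`` (package and service names vary by distribution; this is
only a sketch)::

    sudo apt install chrony
    sudo systemctl enable --now chrony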

Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.

To deploy additional manager daemons::

    ceph-deploy mgr create node2 node3

You should see the standby managers in the output from::

    ssh node1 sudo ceph -s

Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

    ceph-deploy rgw create {gateway-node}

For example::

    ceph-deploy rgw create node1

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

    [client]
    rgw frontends = civetweb port=[::]:80
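
A change to ``rgw frontends`` only takes effect after the gateway daemon is
restarted. One way to restart every RGW instance on the gateway node (a
sketch, using the same ``ssh`` pattern as above)::

    ssh node1 sudo systemctl restart ceph-radosgw.target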

Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map, and the CRUSH algorithm
calculates how to map the object to a `placement group`_ and then how to
assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

    ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

     echo {Test-data} > testfile.txt
     ceph osd pool create mytest 8
     rados put {object-name} {file-path} --pool=mytest
     rados put test-object-1 testfile.txt --pool=mytest

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

     rados -p mytest ls

   Now, identify the object location::

     ceph osd map {pool-name} {object-name}
     ceph osd map mytest test-object-1

   Ceph should output the object's location. For example::

     osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

     rados rm test-object-1 --pool=mytest

   To delete the ``mytest`` pool::

     ceph osd pool rm mytest

   (For safety reasons you will need to supply additional arguments as
   prompted; deleting pools destroys data.)
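
   In recent Ceph releases, pool deletion is disabled unless the monitors are
   configured with ``mon_allow_pool_delete = true``. Once deletion is allowed,
   the full command looks like this (the repeated pool name and the
   ``--yes-i-really-really-mean-it`` flag are deliberate safety checks)::

     ceph osd pool rm mytest mytest --yes-i-really-really-mean-it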

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
data migration or balancing manually.

.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management