=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. include:: quick-common.rst

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a Metadata Server, two more Ceph Monitors and Managers, and an RGW instance.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

  mkdir my-cluster
  cd my-cluster

The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue ``sudo``
   commands needed on the remote host.


Starting over
=============

If at any point you run into trouble and you want to start over, execute
the following to purge the Ceph packages and erase all of their data and
configuration::

  ceph-deploy purge {ceph-node} [{ceph-node}]
  ceph-deploy purgedata {ceph-node} [{ceph-node}]
  ceph-deploy forgetkeys
  rm ceph.*

If you execute ``purge``, you must re-install Ceph. The last ``rm``
command removes any files that ``ceph-deploy`` wrote out locally
during a previous installation.
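
For example, to wipe the three demo nodes used throughout this guide and start
from a clean slate (a minimal sketch, assuming the ``node1``, ``node2`` and
``node3`` hostnames from the Preflight Checklist)::

  ceph-deploy purge node1 node2 node3
  ceph-deploy purgedata node1 node2 node3
  ceph-deploy forgetkeys
  rm ceph.*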


Create a Cluster
================

On your admin node from the directory you created for holding your
configuration details, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

     ceph-deploy new {initial-monitor-node(s)}

   Specify node(s) as hostname, fqdn or hostname:fqdn. For example::

     ceph-deploy new node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
   current directory. You should see a Ceph configuration file
   (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
   and a log file for the new cluster. See `ceph-deploy new -h`_ for
   additional details.

#. If you have more than one network interface, add the ``public network``
   setting under the ``[global]`` section of your Ceph configuration file.
   See the `Network Configuration Reference`_ for details. ::

     public network = {ip-address}/{bits}

   For example::

     public network = 10.1.2.0/24

   to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network. A
   complete example ``ceph.conf`` is sketched after this procedure.

#. If you are deploying in an IPv6 environment, run the following to add the
   required setting to ``ceph.conf`` in the local directory::

     echo ms bind ipv6 = true >> ceph.conf

#. Install Ceph packages::

     ceph-deploy install {ceph-node} [...]

   For example::

     ceph-deploy install node1 node2 node3

   The ``ceph-deploy`` utility will install Ceph on each node.

#. Deploy the initial monitor(s) and gather the keys::

     ceph-deploy mon create-initial

   Once you complete the process, your local directory should have the following
   keyrings:

   - ``ceph.client.admin.keyring``
   - ``ceph.bootstrap-mgr.keyring``
   - ``ceph.bootstrap-osd.keyring``
   - ``ceph.bootstrap-mds.keyring``
   - ``ceph.bootstrap-rgw.keyring``

   .. note:: If this process fails with a message similar to "Unable to
      find /etc/ceph/ceph.client.admin.keyring", please ensure that the
      IP listed for the monitor node in ``ceph.conf`` is the public IP, not
      the private IP.

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

     ceph-deploy admin {ceph-node(s)}

   For example::

     ceph-deploy admin node1 node2 node3

#. Deploy a manager daemon::

     ceph-deploy mgr create node1

#. Add three OSDs. For the purposes of these instructions, we assume you have
   an unused disk in each node called ``/dev/vdb``. *Be sure that the device
   is not currently in use and does not contain any important data.* ::

     ceph-deploy osd create {ceph-node}:{device}

   For example::

     ceph-deploy osd create node1:vdb node2:vdb node3:vdb

#. Check your cluster's health. ::

     ssh node1 sudo ceph health

   Your cluster should report ``HEALTH_OK``. You can view a more complete
   cluster status with::

     ssh node1 sudo ceph -s
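
For reference, after steps 1 and 2 the ``ceph.conf`` in your working directory
will look roughly like the sketch below. The ``fsid``, hostname and addresses
are generated from your environment; the values shown here are illustrative
only, not output from a real deployment.

.. code-block:: ini

   [global]
   fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
   mon initial members = node1
   mon host = 10.1.2.11
   public network = 10.1.2.0/24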


Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph Metadata Server to ``node1``. Then add a Ceph Monitor and
Ceph Manager to ``node2`` and ``node3`` to improve reliability and availability.

.. ditaa::

           /------------------\         /----------------\
           |    ceph-deploy   |         |     node1      |
           |    Admin Node    |         | cCCC           |
           |                  +-------->+   mon.node1    |
           |                  |         |   osd.0        |
           |                  |         |   mgr.node1    |
           |                  |         |   mds.node1    |
           \---------+--------/         \----------------/
                     |
                     |                  /----------------\
                     |                  |     node2      |
                     |                  | cCCC           |
                     +----------------->+                |
                     |                  |   osd.1        |
                     |                  |   mon.node2    |
                     |                  \----------------/
                     |
                     |                  /----------------\
                     |                  |     node3      |
                     |                  | cCCC           |
                     +----------------->+                |
                                        |   osd.2        |
                                        |   mon.node3    |
                                        \----------------/

Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

  ceph-deploy mds create {ceph-node}

For example::

  ceph-deploy mds create node1
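
The metadata server remains inactive until there is a filesystem for it to
serve. A minimal sketch of creating one (the pool names, placement-group
counts and the use of ``node1`` here are illustrative; consult the CephFS
documentation for guidance on sizing)::

  ssh node1 sudo ceph osd pool create cephfs_data 8
  ssh node1 sudo ceph osd pool create cephfs_metadata 8
  ssh node1 sudo ceph fs new cephfs cephfs_metadata cephfs_data
  ssh node1 sudo ceph mds stat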

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2*, where *N* is the number of monitors) to form a quorum.
Odd numbers of monitors tend to be better, although this is not required.

.. tip:: If you did not define the ``public network`` option above then
   the new monitor will not know which IP address to bind to on the
   new hosts. You can add this line to your ``ceph.conf`` by editing
   it now and then push it out to each node with
   ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.

Add two Ceph Monitors to your cluster::

  ceph-deploy mon add {ceph-nodes}

For example::

  ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

  ceph quorum_status --format json-pretty
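
For a one-line summary of the monitor map and the current quorum, you can also
run the following (assuming the admin keyring was distributed to ``node1``
earlier in this guide)::

  ssh node1 sudo ceph mon stat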

.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
   configure NTP on each monitor host. Ensure that the
   monitors are NTP peers.

Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.

To deploy additional manager daemons::

  ceph-deploy mgr create node2 node3

You should see the standby managers in the output from::

  ssh node1 sudo ceph -s


Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::

  ceph-deploy rgw create {gateway-node}

For example::

  ceph-deploy rgw create node1

By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=80

To use an IPv6 address, use:

.. code-block:: ini

   [client]
   rgw frontends = civetweb port=[::]:80
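
Restart the gateway after changing ``ceph.conf`` so the new frontend setting
takes effect. A sketch, assuming the gateway runs on ``node1`` and the default
systemd unit naming of the form ``ceph-radosgw@rgw.<short-hostname>``::

  ceph-deploy --overwrite-conf config push node1
  ssh node1 sudo systemctl restart ceph-radosgw@rgw.node1
  curl http://node1:80/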


Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

  ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data and a pool name using the
   ``rados put`` command on the command line. For example::

     echo {Test-data} > testfile.txt
     ceph osd pool create mytest 8
     rados put {object-name} {file-path} --pool=mytest
     rados put test-object-1 testfile.txt --pool=mytest

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

     rados -p mytest ls

   Now, identify the object location::

     ceph osd map {pool-name} {object-name}
     ceph osd map mytest test-object-1

   Ceph should output the object's location. For example::

     osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

     rados rm test-object-1 --pool=mytest

   To delete the ``mytest`` pool::

     ceph osd pool rm mytest

   (For safety reasons you will need to supply additional arguments as
   prompted; deleting pools destroys data.)
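
   In recent Ceph releases the monitors must also permit pool deletion
   (``mon allow pool delete = true``) before this succeeds. A sketch of the
   full command (the repeated pool name and the
   ``--yes-i-really-really-mean-it`` flag are the confirmation arguments
   referred to above)::

     ceph osd pool rm mytest mytest --yes-i-really-really-mean-it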

As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
data migration or balancing manually.


.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _User Management: ../../rados/operations/user-management