:orphan:

.. _ceph-deploy:

=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *create* *--data* *device* *ceph-node*

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool that allows quick and easy deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node and sudo to
obtain administrator privileges on them; its underlying Python scripts automate
the manual process of installing Ceph on each node from the admin node itself.
It can easily be run on a workstation and doesn't require servers, databases or
any other automated tools. With :program:`ceph-deploy`, it is easy to set up
and take down a cluster. However, it is not a generic deployment tool. It is a
specific tool designed for those who want to get Ceph up and running quickly
with only the unavoidable initial configuration settings and without the
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations, or
want to set up a cluster following detailed manual steps, should use other
tools such as ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.

Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from the admin node to gain passwordless ssh access to
the monitor node(s), validates the host IP, and creates a cluster with a new
initial monitor node or nodes for monitor quorum, a Ceph configuration file, a
monitor secret keyring and a log file for the new cluster. It populates the
newly created Ceph configuration file with the ``fsid`` of the cluster and the
hostnames and IP addresses of the initial monitor members under the ``[global]``
section.

Usage::

    ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (the short hostname, i.e. the output
of ``hostname -s``).

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used with
this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the one IP from the
remote host that exists within the subnet range. The public network can also be
set at runtime using the :option:`--public-network` option with the command as
mentioned above.
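
For example, assuming three initial monitor hosts named ``mon1``, ``mon2`` and
``mon3`` (hypothetical names) whose public addresses sit in an illustrative
``10.1.1.0/24`` subnet, the invocation might look like::

    ceph-deploy new --public-network 10.1.1.0/24 mon1 mon2 mon3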


install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on the admin and other nodes using passwordless ssh
and sudo so that Ceph packages from the upstream repository get higher priority.
It then detects the platform and distribution of the hosts and installs Ceph
normally by downloading distro-compatible packages, provided an adequate repo
for Ceph has already been added. The ``--release`` flag is used to select the
release for installation. During the detection of platform and distribution
before installation, if it finds the ``distro.init`` to be ``sysvinit`` (Fedora,
CentOS/RHEL, etc.), it doesn't allow installation with a custom cluster name and
uses the default name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo URL with :option:`--repo-url` for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. When
installing from a custom repo, a boolean is used to determine the logic needed
to proceed with the custom repo installation. A custom repo install helper is
used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. ``cd_conf`` is the object built from ``argparse``
that holds the flags and information needed to determine what metadata from the
configuration is to be used.

A user can also opt to install only the repository, without installing Ceph and
its dependencies, by using the :option:`--repo` option.

Usage::

    ceph-deploy install [HOST] [HOST...]

Here, [HOST] is the host node (or nodes) where Ceph is to be installed.

The ``--release`` option is used to install a release known by its CODENAME
(default: firefly).

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.
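
For example, assuming two hypothetical hosts named ``node1`` and ``node2`` and
that the release codename ``luminous`` (an illustrative choice) is wanted
instead of the default, the invocation might look like::

    ceph-deploy install --release luminous node1 node2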


mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it on the desired host. The key generally has a
format of ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring, it
runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs the
appropriate commands based on ``distro.init`` to start the ``mds``.

Usage::

    ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
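
For example, to create an mds on a hypothetical host ``node1`` with an explicit
daemon name (here ``mds-a``, an arbitrary example)::

    ceph-deploy mds create node1:mds-a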


mon
---

Deploy Ceph monitors on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys monitors defined in ``mon initial members``
under the ``[global]`` section of the Ceph configuration file, waits until they
form quorum and then gathers the keys, reporting the monitor status along the
way. If the monitors don't form quorum the command will eventually time out.

Usage::

    ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are to be made monitors. If no hosts are specified it will
default to the ``mon initial members`` defined under the ``[global]`` section of
the Ceph configuration file. ``create`` first detects the platform and distro
of the desired hosts and checks whether the hostname is compatible for
deployment. It then uses the monitor keyring initially created by the ``new``
command and deploys the monitor on the desired host. If multiple hosts were
specified during the ``new`` command, i.e. if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for deployment of the monitors. In this process a keyring parser
is used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that is injected into the monitors with :option:`--mkfs` on the
remote nodes. All keyring files ending with ``.keyring`` in a directory are
concatenated. During this process the helper uses the list of sections returned
by the keyring parser to check whether an entity is already present in a keyring
and, if not, adds it. The concatenated keyring is used for the deployment of
monitors to the desired hosts.

Usage::

    ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects the platform and distro of the desired host and checks whether the
hostname is compatible for deployment. It then uses the monitor keyring, ensures
the configuration for the new monitor host and adds the monitor to the cluster.
If a section for the monitor exists and defines a monitor address, that address
will be used; otherwise it falls back to resolving the hostname to an IP. If
:option:`--address` is used it will override all other options. After adding the
monitor to the cluster, it gives it some time to start. It then looks for any
monitor errors and checks the monitor status. Monitor errors arise if the
monitor is not added in ``mon initial members``, if it doesn't exist in the
``monmap``, or if neither the ``public_addr`` nor the ``public_network`` key was
defined for the monitors. Under such conditions, monitors may not be able to
form quorum. The monitor status tells whether the monitor is up and running
normally. The status is checked by running ``ceph daemon mon.hostname mon_status``
on the remote end, which provides the output and returns a boolean status.
``False`` means the monitor is not healthy even if it is up and running, while
``True`` means the monitor is up and running correctly.

Usage::

    ceph-deploy mon add [HOST]

    ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note that, unlike other ``mon`` subcommands, only one node can be
specified at a time.
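
For example, assuming a hypothetical monitor host ``mon4`` that should be added
with the (illustrative) address ``192.168.1.14``::

    ceph-deploy mon add mon4 --address 192.168.1.14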

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon has really stopped, creates an archive directory
``mon-remove`` under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format inside it and removes the monitor from
the cluster by running a ``ceph mon remove`` command.

Usage::

    ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.


gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, the monitor
keyring and the ``bootstrap-mds``/``bootstrap-osd`` keyrings from the monitor
host. These authentication keys are used when new ``monitors/OSDs/MDS`` are
added to the cluster.

Usage::

    ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from which the keys are to be pulled.


disk
----

Manage disks on a remote host. It actually triggers the ``ceph-volume`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

    ceph-deploy disk list HOST


Subcommand ``zap`` zaps/erases/destroys a device's partition table and
contents. It actually uses ``ceph-volume lvm zap`` remotely, alternatively
allowing someone to remove the Ceph metadata from the logical volume.
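
Usage (a sketch; here HOST is the remote node and DISK is a device path such as
``/dev/sdb``, and the exact argument form may vary between ceph-deploy
releases)::

    ceph-deploy disk zap HOST DISK [DISK...]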

osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``create`` prepares a device for a Ceph OSD. It first checks against
multiple OSDs getting created and warns about the possibility of more than the
recommended number, which would cause issues with the maximum allowed PIDs in a
system. It then reads the ``bootstrap-osd`` key for the cluster, or writes the
bootstrap key if it is not found. It then uses the :program:`ceph-volume`
utility's ``lvm create`` subcommand to prepare the disk (and journal, if using
filestore) and deploy the OSD on the desired host. Once prepared, it gives the
OSD some time to start and checks for any possible errors, reporting them to the
user if found.

Bluestore Usage::

    ceph-deploy osd create --data DISK HOST

Filestore Usage::

    ceph-deploy osd create --data DISK --journal JOURNAL HOST


.. note:: For other flags available, please see the man page or the ``--help``
   menu of ``ceph-deploy osd create``.

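For example, assuming a hypothetical host ``node1`` with an unused device
``/dev/sdb``, a bluestore OSD might be created with::

    ceph-deploy osd create --data /dev/sdb node1
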
Subcommand ``list`` lists devices associated to Ceph as part of an OSD.
It uses the ``ceph-volume lvm list`` output, which is rich and maps OSDs to
devices along with other interesting information about the OSD setup.

Usage::

    ceph-deploy osd list HOST


admin
-----

Push the configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

    ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.


config
------

Push/pull a configuration file to/from a remote host. It uses the ``push``
subcommand to take the configuration file from the admin host and write it to
the remote host under the ``/etc/ceph`` directory. It uses the ``pull``
subcommand to do the opposite, i.e. pull the configuration file from the
``/etc/ceph`` directory of the remote host to the admin node.

Usage::

    ceph-deploy config push [HOST] [HOST...]

    ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed to
or pulled from.


uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls the Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because they
can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which Ceph will be uninstalled.


purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls the Ceph packages and
purges all data. However, some dependencies like ``librbd1`` and ``librados2``
will not be removed because they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which Ceph will be purged.


purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks
whether Ceph is still installed on the selected host; if it is installed, it
won't purge data from it. If Ceph is already uninstalled from the host, it tries
to remove the contents of ``/var/lib/ceph``. If this fails, OSDs are probably
still mounted and need to be unmounted to continue. It unmounts the OSDs, tries
to remove the contents of ``/var/lib/ceph`` again and checks for errors. It also
removes the contents of ``/etc/ceph``. Once all steps are successfully
completed, all the Ceph data on the selected host has been removed.

Usage::

    ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which the Ceph data will be purged.


forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e. the monitor keyring, the ``client.admin`` keyring, and
the ``bootstrap-osd`` and ``bootstrap-mds`` keyrings, from the node.

Usage::

    ceph-deploy forgetkeys


pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options, :option:`--install` and
:option:`--remove`, are used for this purpose.

Usage::

    ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

    ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where the packages are to be installed or removed
from.


Options
=======

.. option:: --address

   IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

   Install packages modifying source repos.

.. option:: --ceph-conf

   Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

   Name of the cluster.

.. option:: --dev

   Install a bleeding-edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

   Specify the (internal) cluster network.

.. option:: --dmcrypt

   Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

   Directory where ``dm-crypt`` keys are stored.

.. option:: --install

   Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

   Filesystem to use to format the disk (``xfs``, ``btrfs`` or ``ext4``). Note
   that support for ``btrfs`` and ``ext4`` is no longer tested or recommended;
   please use ``xfs``.

.. option:: --fsid

   Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

   Specify a GPG key URL to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

   Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

   Fetch packages and push them to hosts for a local repo mirror.

.. option:: --mkfs

   Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

   Install packages without modifying source repos.

.. option:: --no-ssh-copykey

   Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

   Overwrite an existing conf file on the remote host (if present).

.. option:: --public-network

   Specify the public network for a cluster.

.. option:: --remove

   Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

   Install repo files only (skips package installation).

.. option:: --repo-url

   Specify a repo URL that mirrors/contains Ceph packages.

.. option:: --testing

   Install the latest development release.

.. option:: --username

   The username to connect to the remote host.

.. option:: --version

   The currently installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

   Destroy the partition table and contents of a disk.


Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
https://ceph.com/ceph-deploy/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-volume <ceph-volume>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)