:orphan:

.. _ceph-deploy:

=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *create* *--data* *device* *ceph-node*

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool for quick and easy deployment of a Ceph
cluster without complex and detailed manual configuration. It uses ssh to gain
access to other Ceph nodes from the admin node and sudo for administrator
privileges on them, and its underlying Python scripts automate the manual
process of installing Ceph on each node from the admin node itself. It can
easily be run on a workstation and doesn't require servers, databases or any
other automation tools. With :program:`ceph-deploy`, it is easy to set up and
take down a cluster. However, it is not a generic deployment tool. It is a
specific tool designed for those who want to get Ceph up and running quickly
with only the unavoidable initial configuration settings and without the
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations, or
who want to set up a cluster following detailed manual steps, should use other
tools, i.e. ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.

Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for
it. It tries to copy ssh keys from the admin node to gain passwordless ssh
access to the monitor node(s), validates the host IP, and creates a cluster
with a new initial monitor node or nodes for monitor quorum, a Ceph
configuration file, a monitor secret keyring and a log file for the new
cluster. It populates the newly created Ceph configuration file with the
``fsid`` of the cluster and the hostnames and IP addresses of the initial
monitor members under the ``[global]`` section.

Usage::

    ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (short hostname, i.e. ``hostname -s``).

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used
with this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the one IP from the
remote host that exists within the subnet range. The public network can also be
added at runtime using the :option:`--public-network` option with the command
as mentioned above.
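
For example, assuming a single initial monitor host ``mon1`` and a public
subnet of ``192.168.10.0/24`` (both values are only illustrative), the public
network can be supplied directly on the command line::

    ceph-deploy new --public-network 192.168.10.0/24 mon1

This corresponds to adding ``public network = 192.168.10.0/24`` under the
``[global]`` section of the generated Ceph configuration file.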


install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on the admin and other nodes using passwordless ssh
and sudo so that Ceph packages from the upstream repository get higher
priority. It then detects the platform and distribution for the hosts and
installs Ceph normally by downloading distro-compatible packages, provided an
adequate repo for Ceph has already been added. The ``--release`` flag is used
to get the latest release for installation. During detection of platform and
distribution before installation, if it finds the ``distro.init`` to be
``sysvinit`` (Fedora, CentOS/RHEL etc.), it doesn't allow installation with a
custom cluster name and uses the default name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo URL with :option:`--repo-url`
for installation, anything detected from the configuration will be overridden
and the custom repository location will be used for installation of Ceph
packages. If required, valid custom repositories are also detected and
installed. In case of installation from a custom repo, a boolean is used to
determine the logic needed to proceed with a custom repo installation. A custom
repo install helper is used that goes through config checks to retrieve repos
(and any extra repos defined) and installs them. ``cd_conf`` is the object
built from ``argparse`` that holds the flags and information needed to
determine what metadata from the configuration is to be used.

A user can also opt to install only the repository, without installing Ceph and
its dependencies, by using the :option:`--repo` option.

Usage::

    ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

An option ``--release`` is used to install a release known as CODENAME
(default: firefly).
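
For example, a specific named release could be requested for two hypothetical
hosts ``node1`` and ``node2``::

    ceph-deploy install --release nautilus node1 node2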

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.


mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the
``bootstrap-mds`` key for the cluster and deploy it on the desired host. The
key generally has a format of ``{cluster}.bootstrap-mds.keyring``. If it
doesn't find a keyring, it runs ``gatherkeys`` to get the keyring. It then
creates an mds on the desired host under the path ``/var/lib/ceph/mds/`` in
``/var/lib/ceph/mds/{cluster}-{name}`` format and a bootstrap keyring under
``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs
appropriate commands based on ``distro.init`` to start the ``mds``.

Usage::

    ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
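
For example, an mds could be created on a hypothetical host ``node1`` with the
default daemon name, or on ``node2`` with an explicit daemon name ``mds-a``::

    ceph-deploy mds create node1 node2:mds-a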


mon
---

Deploy Ceph monitors on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys monitors defined in
``mon initial members`` under the ``[global]`` section of the Ceph
configuration file, waits until they form quorum and then runs ``gatherkeys``,
reporting the monitor status along the way. If the monitors don't form quorum
the command will eventually time out.

Usage::

    ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to the ``mon initial members`` defined under the ``[global]``
section of the Ceph configuration file. ``create`` first detects the platform
and distro for the desired hosts and checks if the hostname is compatible for
deployment. It then uses the monitor keyring initially created using the
``new`` command and deploys the monitor on the desired host. If multiple hosts
were specified during the ``new`` command, i.e. if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for deployment of monitors. In this process a keyring parser is
used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that will be injected into the monitors with :option:`--mkfs` on
remote nodes. All keyring files are concatenated into a single keyring whose
name ends with ``.keyring``. During this process the helper uses the list of
sections returned by the keyring parser to check if an entity is already
present in a keyring and, if not, adds it. The concatenated keyring is used for
deployment of monitors to the desired multiple hosts.

Usage::

    ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects the platform and distro for the desired host and checks if the hostname
is compatible for deployment. It then uses the monitor keyring, ensures
configuration for the new monitor host and adds the monitor to the cluster. If
the section for the monitor exists and defines a monitor address, that address
will be used; otherwise it will fall back to resolving the hostname to an IP.
If :option:`--address` is used it will override all other options. After adding
the monitor to the cluster, it gives it some time to start. It then looks for
any monitor errors and checks the monitor status. Monitor errors arise if the
monitor is not added in ``mon initial members``, if it doesn't exist in the
``monmap`` or if neither ``public_addr`` nor ``public_network`` keys were
defined for monitors. Under such conditions, monitors may not be able to form
quorum. Monitor status tells whether the monitor is up and running normally.
The status is checked by running ``ceph daemon mon.hostname mon_status`` on the
remote end, which provides the output and returns a boolean status of what is
going on. ``False`` means a monitor that is not fine even if it is up and
running, while ``True`` means the monitor is up and running correctly.

Usage::

    ceph-deploy mon add [HOST]

    ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note, unlike other ``mon`` subcommands, only one node can be
specified at a time.
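
For example, a monitor on a hypothetical host ``mon4`` with address
``192.168.10.14`` could be added with its address given explicitly, so that it
does not have to be resolved from the hostname::

    ceph-deploy mon add mon4 --address 192.168.10.14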

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon has really stopped, creates an archive directory
``mon-remove`` under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format in it and removes the monitor from the
cluster by running the ``ceph remove...`` command.

Usage::

    ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.


gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, the monitor
keyring and the ``bootstrap-mds``/``bootstrap-osd`` keyrings from the monitor
host. These authentication keys are used when new ``monitors/OSDs/MDS`` are
added to the cluster.

Usage::

    ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.


disk
----

Manage disks on a remote host. It actually triggers the ``ceph-volume`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

    ceph-deploy disk list HOST

Subcommand ``zap`` zaps/erases/destroys a device's partition table and
contents. It actually uses ``ceph-volume lvm zap`` remotely, alternatively
allowing someone to remove the Ceph metadata from the logical volume.
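
A typical invocation takes the host followed by one or more device paths
(assuming the ``ceph-volume``-based calling convention; the host and device
below are only illustrative)::

    ceph-deploy disk zap node1 /dev/sdb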

osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``create`` prepares a device for a Ceph OSD. It first checks against
multiple OSDs getting created and warns about the possibility of more than the
recommended number, which would cause issues with the maximum allowed PIDs in a
system. It then reads the bootstrap-osd key for the cluster, or writes the
bootstrap key if it is not found. It then uses the :program:`ceph-volume`
utility's ``lvm create`` subcommand to prepare the disk (and journal if using
filestore) and deploy the OSD on the desired host. Once prepared, it gives the
OSD some time to start and checks for any possible errors and, if found,
reports them to the user.

Bluestore Usage::

    ceph-deploy osd create --data DISK HOST

Filestore Usage::

    ceph-deploy osd create --data DISK --journal JOURNAL HOST

.. note:: For other flags available, please see the man page or the ``--help``
   menu of ``ceph-deploy osd create``.
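
For example, a bluestore OSD could be created on a hypothetical host ``node1``
using a raw device ``/dev/sdb`` (both values are only illustrative)::

    ceph-deploy osd create --data /dev/sdb node1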

Subcommand ``list`` lists devices associated with Ceph as part of an OSD. It
uses the ``ceph-volume lvm list`` output, which has a rich format, mapping OSDs
to devices and showing other interesting information about the OSD setup.

Usage::

    ceph-deploy osd list HOST


admin
-----

Push the configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

    ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.


config
------

Push/pull a configuration file to/from a remote host. The ``push`` subcommand
takes the configuration file from the admin host and writes it to the remote
host under the ``/etc/ceph`` directory. The ``pull`` subcommand does the
opposite, i.e. it pulls the configuration file under the ``/etc/ceph``
directory of the remote host to the admin node.

Usage::

    ceph-deploy config push [HOST] [HOST...]

    ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed
to or pulled from.


uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because
they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.


purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls Ceph packages and purges
all data. However, some dependencies like ``librbd1`` and ``librados2`` will
not be removed because they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be purged.


purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks
whether Ceph is still installed on the selected host; if it is installed, it
won't purge data from the host. If Ceph is already uninstalled from the host,
it tries to remove the contents of ``/var/lib/ceph``. If this fails, OSDs are
probably still mounted and need to be unmounted to continue. It unmounts the
OSDs, tries to remove the contents of ``/var/lib/ceph`` again and checks for
errors. It also removes the contents of ``/etc/ceph``. Once all steps are
successfully completed, all Ceph data on the selected host has been removed.

Usage::

    ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.


forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e. the monitor keyring, client.admin keyring,
bootstrap-osd and bootstrap-mds keyrings, from the node.

Usage::

    ceph-deploy forgetkeys


pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options, :option:`--install` and
:option:`--remove`, are used for this purpose.

Usage::

    ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

    ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where the packages are to be installed or removed
from.
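
For example, two packages could be installed on a hypothetical host ``node1``
like this::

    ceph-deploy pkg --install vim,htop node1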


Options
=======

.. option:: --address

   IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

   Install packages modifying source repos.

.. option:: --ceph-conf

   Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

   Name of the cluster.

.. option:: --dev

   Install a bleeding-edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

   Specify the (internal) cluster network.

.. option:: --dmcrypt

   Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

   Directory where ``dm-crypt`` keys are stored.

.. option:: --install

   Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

   Filesystem to use to format the disk (``xfs``, ``btrfs`` or ``ext4``). Note
   that support for ``btrfs`` and ``ext4`` is no longer tested or recommended;
   please use ``xfs``.

.. option:: --fsid

   Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

   Specify a GPG key URL to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

   Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

   Fetch packages and push them to hosts for a local repo mirror.

.. option:: --mkfs

   Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

   Install packages without modifying source repos.

.. option:: --no-ssh-copykey

   Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

   Overwrite an existing conf file on the remote host (if present).

.. option:: --public-network

   Specify the public network for a cluster.

.. option:: --remove

   Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

   Install repo files only (skips package installation).

.. option:: --repo-url

   Specify a repo URL that mirrors/contains Ceph packages.

.. option:: --testing

   Install the latest development release.

.. option:: --username

   The username to connect to the remote host.

.. option:: --version

   The currently installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

   Destroy the partition table and content of a disk.


Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
https://ceph.com/ceph-deploy/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-volume <ceph-volume>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)