=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *create* *--data* *device* *ceph-node*

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node and sudo for
administrator privileges on them; the underlying Python scripts automate the
manual process of Ceph installation on each node from the admin node itself.
It can easily be run on a workstation and doesn't require servers, databases or
any other automation tools. With :program:`ceph-deploy`, it is really easy to
set up and take down a cluster. However, it is not a generic deployment tool.
It is a specific tool designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without
the overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``.
Those who want to customize security settings, partitions or directory
locations, or want to set up a cluster following detailed manual steps, should
use other tools, e.g. ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.
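
These steps are typically combined into a short bootstrap sequence. The
following is a minimal sketch of such a workflow; the hostnames ``node1``,
``node2`` and ``node3`` and the device ``/dev/sdb`` are illustrative only::

    ceph-deploy new node1
    ceph-deploy install node1 node2 node3
    ceph-deploy mon create-initial
    ceph-deploy osd create --data /dev/sdb node2
    ceph-deploy admin node1 node2 node3
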
Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from the admin node to gain passwordless ssh access
to the monitor node(s), validates the host IP, and creates a cluster with a new
initial monitor node or nodes for monitor quorum, a Ceph configuration file, a
monitor secret keyring and a log file for the new cluster. It populates the
newly created Ceph configuration file with the ``fsid`` of the cluster and the
hostnames and IP addresses of the initial monitor members under the
``[global]`` section.

Usage::

    ceph-deploy new [MON][MON...]

Here, [MON] is the initial monitor hostname (the short hostname, i.e. the
output of ``hostname -s``).

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used
with this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the one IP from the
remote host that exists within the subnet range. The public network can also be
set at runtime using the :option:`--public-network` option with the command as
mentioned above.
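
As an illustration, with a single initial monitor and a public subnet given on
the command line, the generated ``ceph.conf`` would contain a ``[global]``
section along the following lines; the hostname, addresses and fsid shown here
are made up::

    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon_initial_members = node1
    mon_host = 192.168.0.10
    public_network = 192.168.0.0/24
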
install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on the admin and other nodes using passwordless ssh
and sudo so that Ceph packages from the upstream repository get higher
priority. It then detects the platform and distribution for the hosts and
installs Ceph normally by downloading distro-compatible packages, provided an
adequate repo for Ceph has already been added. The ``--release`` flag is used
to get the latest release for installation. During detection of platform and
distribution before installation, if it finds the ``distro.init`` to be
``sysvinit`` (Fedora, CentOS/RHEL etc), it doesn't allow installation with a
custom cluster name and uses the default name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo url with :option:`--repo-url`
for installation, anything detected from the configuration will be overridden
and the custom repository location will be used for installation of Ceph
packages. If required, valid custom repositories are also detected and
installed. In case of installation from a custom repo, a boolean is used to
determine the logic needed to proceed with a custom repo installation. A custom
repo install helper is used that goes through config checks to retrieve repos
(and any extra repos defined) and installs them. ``cd_conf`` is the object
built from ``argparse`` that holds the flags and information needed to
determine what metadata from the configuration is to be used.

A user can also opt to install only the repository without installing Ceph and
its dependencies by using the :option:`--repo` option.

Usage::

    ceph-deploy install [HOST][HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

An option ``--release`` is used to install a release known by its codename
(CODENAME).

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.
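
As a sketch of the flows described above, the first line installs a specific
named release on two hosts, and the second overrides any detected repository
with a custom mirror; the release codename, hostnames and URLs are placeholders::

    ceph-deploy install --release CODENAME node1 node2
    ceph-deploy install --repo-url https://example.com/ceph-repo \
                        --gpg-url https://example.com/release.asc node1
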
mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it in the desired host. The key generally has a
format of ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring,
it runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs appropriate
commands based on ``distro.init`` to start the ``mds``.

Usage::

    ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
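
For instance, the following would create one mds with the default name and
another with an explicit daemon name; the hostname ``node1`` and the daemon
name ``mds-a`` are illustrative only::

    ceph-deploy mds create node1
    ceph-deploy mds create node1:mds-a
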
mon
---

Deploy Ceph monitors on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys the monitors defined in
``mon initial members`` under the ``[global]`` section of the Ceph configuration
file, waits until they form quorum and then gathers keys, reporting the monitor
status along the way. If the monitors don't form quorum the command will
eventually time out.

Usage::

    ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to the ``mon initial members`` defined under the ``[global]``
section of the Ceph configuration file. ``create`` first detects the platform
and distro for the desired hosts and checks if the hostname is compatible for
deployment. It then uses the monitor keyring initially created using the ``new``
command and deploys the monitor on the desired host. If multiple hosts were
specified during the ``new`` command, i.e. if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for deployment of monitors. In this process a keyring parser is
used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that will be injected into the monitors with :option:`--mkfs` on
remote nodes. All keyring files are concatenated into a single keyring. During
this process the helper uses the list of sections returned by the keyring
parser to check if an entity is already present in a keyring and, if not, adds
it. The concatenated keyring is used for deployment of monitors to the desired
multiple hosts.

Usage::

    ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects the platform and distro for the desired host and checks if the hostname
is compatible for deployment. It then uses the monitor keyring, ensures
configuration for the new monitor host and adds the monitor to the cluster. If
the section for the monitor exists and defines a monitor address, that address
will be used; otherwise it will fall back to resolving the hostname to an IP.
If :option:`--address` is used it will override all other options. After adding
the monitor to the cluster, it gives it some time to start. It then looks for
any monitor errors and checks the monitor status. Monitor errors arise if the
monitor is not present in ``mon initial members``, if it doesn't exist in the
``monmap`` or if neither ``public_addr`` nor ``public_network`` keys were
defined for monitors. Under such conditions, monitors may not be able to form
quorum. The monitor status tells whether the monitor is up and running
normally. The status is checked by running ``ceph daemon mon.hostname
mon_status`` on the remote end, which provides the output and returns a boolean
status. ``False`` means a monitor that is not fine even if it is up and
running, while ``True`` means the monitor is up and running correctly.
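
If you need to perform the same status check by hand, a minimal sketch
(assuming a monitor host named ``node1`` and the default cluster name) is to
query the monitor's admin socket directly on that host::

    ssh node1 sudo ceph daemon mon.node1 mon_status
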
Usage::

    ceph-deploy mon add [HOST]

    ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note that, unlike other ``mon`` subcommands, only one node can be
specified at a time.

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon has really stopped, creates an archive directory
``mon-remove`` under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format in it and removes the monitor from the
cluster by running the ``ceph remove...`` command.

Usage::

    ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.

gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, monitor
keyring and ``bootstrap-mds``/``bootstrap-osd`` keyrings from the monitor host.
These authentication keys are used when new ``monitors/OSDs/MDS`` are added to
the cluster.

Usage::

    ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.
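
With the default cluster name ``ceph``, a successful run would typically leave
keyring files such as the following in the current working directory (the exact
set may vary by release)::

    ceph.client.admin.keyring
    ceph.mon.keyring
    ceph.bootstrap-osd.keyring
    ceph.bootstrap-mds.keyring
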
disk
----

Manage disks on a remote host. It actually triggers the ``ceph-volume`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

    ceph-deploy disk list HOST

Subcommand ``zap`` zaps/erases/destroys a device's partition table and
contents. It actually uses ``ceph-volume lvm zap`` remotely, alternatively
allowing someone to remove the Ceph metadata from the logical volume.
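
A hedged sketch, assuming the host is given before the device and using an
illustrative host ``node1`` with a disposable device ``/dev/sdb`` (zapping
destroys its contents)::

    ceph-deploy disk zap node1 /dev/sdb
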
osd
---

Manage OSDs by preparing a data disk on the remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``create`` prepares a device for a Ceph OSD. It first checks whether
multiple OSDs are being created at once and warns that creating more than the
recommended number can cause issues with the maximum allowed PIDs in a system.
It then reads the bootstrap-osd key for the cluster, or writes the bootstrap
key if it is not found.
It then uses the :program:`ceph-volume` utility's ``lvm create`` subcommand to
prepare the disk (and journal if using filestore) and deploy the OSD on the
desired host. Once prepared, it gives some time to the OSD to start and checks
for any possible errors and, if found, reports them to the user.

Bluestore Usage::

    ceph-deploy osd create --data DISK HOST

Filestore Usage::

    ceph-deploy osd create --data DISK --journal JOURNAL HOST

.. note:: For other flags available, please see the man page or the --help menu
   on ``ceph-deploy osd create``.
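
As a concrete bluestore sketch, with an illustrative host ``node2`` and raw
device ``/dev/sdb``::

    ceph-deploy osd create --data /dev/sdb node2
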
Subcommand ``list`` lists the devices associated with Ceph as part of an OSD.
It uses the ``ceph-volume lvm list`` output, which is a rich output mapping
OSDs to devices and other interesting information about the OSD setup.

Usage::

    ceph-deploy osd list HOST

admin
-----

Push the configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

    ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.

config
------

Push/pull a configuration file to/from a remote host. The ``push`` subcommand
takes the configuration file from the admin host and writes it to the remote
host under the ``/etc/ceph`` directory. The ``pull`` subcommand does the
opposite, i.e. it pulls the configuration file under the ``/etc/ceph``
directory of the remote host to the admin node.

Usage::

    ceph-deploy config push [HOST] [HOST...]

    ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed
to or pulled from.

uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because
they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.

purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls Ceph packages and purges
all data. However, some dependencies like ``librbd1`` and ``librados2`` will
not be removed because they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be purged.

purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks if
Ceph is still installed on the selected host; if it is installed, it won't
purge data from it. If Ceph is already uninstalled from the host, it tries to
remove the contents of ``/var/lib/ceph``. If this fails, some OSDs are probably
still mounted and need to be unmounted to continue. It unmounts the OSDs, tries
to remove the contents of ``/var/lib/ceph`` again and checks for errors. It
also removes the contents of ``/etc/ceph``. Once all steps are successfully
completed, all the Ceph data from the selected host is removed.

Usage::

    ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.

forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e. the monitor keyring, client.admin keyring,
bootstrap-osd and bootstrap-mds keyrings, from the node.

Usage::

    ceph-deploy forgetkeys

pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options, :option:`--install` and
:option:`--remove`, are used for this purpose.

Usage::

    ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

    ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where packages are to be installed or removed from.

Options
=======

.. option:: --address

    IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

    Install packages modifying source repos.

.. option:: --ceph-conf

    Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

    Name of the cluster.

.. option:: --dev

    Install a bleeding edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

    Specify the (internal) cluster network.

.. option:: --dmcrypt

    Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

    Directory where ``dm-crypt`` keys are stored.

.. option:: --install

    Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

    Filesystem to use to format the disk (``xfs``, ``btrfs`` or ``ext4``). Note
    that support for btrfs and ext4 is no longer tested or recommended; please
    use xfs.

.. option:: --fsid

    Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

    Specify a GPG key url to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

    Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

    Fetch packages and push them to hosts for a local repo mirror.

.. option:: --mkfs

    Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

    Install packages without modifying source repos.

.. option:: --no-ssh-copykey

    Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

    Overwrite an existing conf file on the remote host (if present).

.. option:: --public-network

    Specify the public network for a cluster.

.. option:: --remove

    Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

    Install repo files only (skips package installation).

.. option:: --repo-url

    Specify a repo url that mirrors/contains Ceph packages.

.. option:: --testing

    Install the latest development release.

.. option:: --username

    The username to connect to the remote host.

.. option:: --version

    The currently installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

    Destroy the partition table and content of a disk.

Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
https://ceph.com/ceph-deploy/docs for more information.

See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-volume <ceph-volume>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)