:orphan:

=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *prepare* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node and sudo for
administrator privileges on them, and its underlying Python scripts automate
the manual process of Ceph installation on each node from the admin node itself.
It can easily be run on a workstation and doesn't require servers, databases or
any other automated tools. With :program:`ceph-deploy`, it is really easy to set
up and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps should use other tools,
such as ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.
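
For example, a minimal deployment from the admin node could be sketched as
follows; the hostnames ``mon1`` and ``osd1`` and the data disk ``/dev/sdb`` are
illustrative only::

        ceph-deploy new mon1
        ceph-deploy install mon1 osd1
        ceph-deploy mon create-initial
        ceph-deploy osd create osd1:/dev/sdb
        ceph-deploy admin mon1 osd1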

Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from the admin node to gain passwordless ssh access to
the monitor node(s), validates the host IP, creates a cluster with a new initial
monitor node or nodes for monitor quorum, and writes a Ceph configuration file,
a monitor secret keyring and a log file for the new cluster. It populates the
newly created Ceph configuration file with the ``fsid`` of the cluster and the
hostnames and IP addresses of the initial monitor members under the ``[global]``
section.

Usage::

        ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (the short hostname, i.e. the output
of ``hostname -s``).

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used with
this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the IP address of
the remote host that falls within the subnet range. The public network can also
be set at runtime using the :option:`--public-network` option with the command,
as mentioned above.
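
For example, a cluster with two network interfaces could be initialized as
follows; the hostnames and the subnet are illustrative::

        ceph-deploy new --public-network 192.168.1.0/24 mon1 mon2 mon3

which would write a ``ceph.conf`` roughly of the form (values illustrative)::

        [global]
        fsid = <generated-uuid>
        mon initial members = mon1, mon2, mon3
        mon host = 192.168.1.11, 192.168.1.12, 192.168.1.13
        public network = 192.168.1.0/24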


install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on the admin and other nodes using passwordless ssh
and sudo so that Ceph packages from the upstream repository get higher priority.
It then detects the platform and distribution of the hosts and installs Ceph
normally by downloading distro-compatible packages, provided an adequate repo
for Ceph has already been added. The ``--release`` flag can be used to select
which release to install. During detection of platform and distribution before
installation, if it finds ``distro.init`` to be ``sysvinit`` (Fedora,
CentOS/RHEL etc.), it doesn't allow installation with a custom cluster name and
uses the default name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo url with :option:`--repo-url` for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In case
of installation from a custom repo, a boolean is used to determine the logic
needed to proceed with the custom repo installation. A custom repo install
helper is used that goes through config checks to retrieve repos (and any extra
repos defined) and installs them. ``cd_conf`` is the object built from
``argparse`` that holds the flags and information needed to determine what
metadata from the configuration is to be used.

A user can also opt to install only the repository, without installing Ceph and
its dependencies, by using the :option:`--repo` option.

Usage::

        ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

The ``--release`` option is used to install a release known by its CODENAME
(default: firefly).

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.
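
For example, a specific release could be installed on several nodes, or the
repos could be left untouched on another; the ``luminous`` codename and the
hostnames are illustrative::

        ceph-deploy install --release luminous node1 node2 node3
        ceph-deploy install --no-adjust-repos node4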


mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it on the desired host. The key generally has the
format ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring, it
runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs the
appropriate commands based on ``distro.init`` to start the ``mds``.

Usage::

        ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
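
For example, an mds could be created with the default daemon name or with an
explicit one; the hostname ``mds1`` and the daemon name ``fs-a`` are
illustrative::

        ceph-deploy mds create mds1
        ceph-deploy mds create mds1:fs-a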


mon
---

Deploy Ceph monitors on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys monitors on the hosts defined in
``mon initial members`` under the ``[global]`` section of the Ceph configuration
file, waits until they form quorum and then gathers keys, reporting the monitor
status along the way. If the monitors don't form quorum the command will
eventually time out.

Usage::

        ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
defaults to the ``mon initial members`` defined under the ``[global]`` section
of the Ceph configuration file. ``create`` first detects the platform and distro
of the desired hosts and checks whether the hostname is compatible for
deployment. It then uses the monitor keyring initially created by the ``new``
command and deploys the monitor on the desired host. If multiple hosts were
specified during the ``new`` command, i.e. if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for the deployment of monitors. In this process a keyring parser
is used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that is injected into the monitors with :option:`--mkfs` on remote
nodes. All keyring files are concatenated into a single keyring ending with
``.keyring``. During this process the helper uses the list of sections returned
by the keyring parser to check if an entity is already present in a keyring and,
if not, adds it. The concatenated keyring is used for the deployment of monitors
to the desired hosts.

Usage::

        ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects the platform and distro of the desired host and checks if the hostname
is compatible for deployment. It then uses the monitor keyring, ensures the
configuration for the new monitor host and adds the monitor to the cluster. If a
section for the monitor exists in the configuration and defines a ``mon addr``,
that address will be used; otherwise it will fall back to resolving the hostname
to an IP. If :option:`--address` is used it will override all other options.
After adding the monitor to the cluster, it gives it some time to start. It then
looks for any monitor errors and checks the monitor status. Monitor errors arise
if the monitor is not listed in ``mon initial members``, if it doesn't exist in
the ``monmap``, or if neither the ``public_addr`` nor the ``public_network`` key
was defined for it. Under such conditions, monitors may not be able to form
quorum. The monitor status tells whether the monitor is up and running normally.
The status is checked by running ``ceph daemon mon.hostname mon_status`` on the
remote end, which provides the output and returns a boolean status. ``False``
means the monitor is not healthy even if it is up and running, while ``True``
means the monitor is up and running correctly.

Usage::

        ceph-deploy mon add [HOST]

        ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note, unlike other ``mon`` subcommands, only one node can be
specified at a time.
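
For example, a single monitor could be added on a new host; the hostname
``mon4`` and the address ``192.168.1.14`` are illustrative::

        ceph-deploy mon add mon4 --address 192.168.1.14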

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon has really stopped, creates an archive directory
``mon-remove`` under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format into it and removes the monitor from the
cluster by running the ``ceph remove...`` command.

Usage::

        ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.


gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, the monitor
keyring and the ``bootstrap-mds``/``bootstrap-osd`` keyrings from the monitor
host. These authentication keys are used when new ``monitors/OSDs/MDS`` are
added to the cluster.

Usage::

        ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.


disk
----

Manage disks on a remote host. It actually triggers the ``ceph-disk`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

        ceph-deploy disk list [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with the Ceph type UUID, creates a
file system, marks the file system as ready for Ceph consumption, uses the
entire partition and adds a new partition to the journal disk.

Usage::

        ceph-deploy disk prepare [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts it in the correct
location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
triggered by ``udev`` when it sees the OSD GPT partition type, or on ceph
service start with ``ceph-disk activate-all``.

Usage::

        ceph-deploy disk activate [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
It actually uses ``sgdisk`` and its option ``--zap-all`` to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to
a GPT disk. The ``prepare`` subcommand can then be executed, which will create a
new GPT partition.

Usage::

        ceph-deploy disk zap [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
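
For example, a disk could be wiped and then prepared in sequence; the hostname
``osd1`` and the device ``/dev/sdb`` are illustrative::

        ceph-deploy disk zap osd1:/dev/sdb
        ceph-deploy disk prepare osd1:/dev/sdb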


osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
first checks whether multiple OSDs are being created and warns if more than the
recommended number would be created, which could cause issues with the maximum
allowed PIDs on a system. It then reads the bootstrap-osd key for the cluster,
or writes the bootstrap key if it is not found. It then uses the
:program:`ceph-disk` utility's ``prepare`` subcommand to prepare the disk and
journal and deploy the OSD on the desired host. Once prepared, it gives the OSD
some time to settle, checks for any possible errors and, if found, reports them
to the user.

Usage::

        ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

Subcommand ``activate`` activates the OSD prepared using the ``prepare``
subcommand. It actually uses the :program:`ceph-disk` utility's ``activate``
subcommand with the appropriate init type based on the distro to activate the
OSD. Once activated, it gives the OSD some time to start, checks for any
possible errors and, if found, reports them to the user. It checks the status of
the prepared OSD, checks the OSD tree and makes sure the OSDs are up and in.

Usage::

        ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

Subcommand ``create`` uses the ``prepare`` and ``activate`` subcommands to
create an OSD.

Usage::

        ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
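
For example, an OSD could be created in one step with its journal on a separate
device; the hostname ``osd1``, the data disk ``/dev/sdb`` and the journal
partition ``/dev/sdc1`` are illustrative::

        ceph-deploy osd create osd1:/dev/sdb:/dev/sdc1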

Subcommand ``list`` lists disk partitions and Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ``ceph-disk list`` output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists and
if the OSD is in the OSD tree, and prints the OSD metadata.

Usage::

        ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]


admin
-----

Push the configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

        ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.
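
For example, to let a couple of nodes run ``ceph`` CLI commands with the
``client.admin`` key (hostnames illustrative)::

        ceph-deploy admin mon1 osd1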


config
------

Push/pull a configuration file to/from a remote host. The ``push`` subcommand
takes the configuration file from the admin host and writes it to the remote
host under the ``/etc/ceph`` directory. The ``pull`` subcommand does the
opposite, i.e. it pulls the configuration file under the ``/etc/ceph`` directory
of the remote host to the admin node.

Usage::

        ceph-deploy config push [HOST] [HOST...]

        ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed to
or pulled from.
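
For example, an updated ``ceph.conf`` could be pushed to two nodes; the
hostnames are illustrative, and passing :option:`--overwrite-conf` before the
subcommand is assumed here as the way to replace a conf file that is already
present::

        ceph-deploy --overwrite-conf config push node1 node2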


uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls the Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because they
can cause issues with ``qemu-kvm``.

Usage::

        ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which Ceph will be uninstalled.


purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls the Ceph packages and
purges all data. However, some dependencies like ``librbd1`` and ``librados2``
will not be removed because they can cause issues with ``qemu-kvm``.

Usage::

        ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which Ceph will be purged.


purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks
whether Ceph is still installed on the selected host and, if it is installed,
it won't purge data from it. If Ceph has already been uninstalled from the host,
it tries to remove the contents of ``/var/lib/ceph``. If that fails, OSDs are
probably still mounted and need to be unmounted before continuing. It unmounts
the OSDs, tries to remove the contents of ``/var/lib/ceph`` again and checks for
errors. It also removes the contents of ``/etc/ceph``. Once all steps are
successfully completed, all the Ceph data on the selected host has been removed.

Usage::

        ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from which Ceph data will be purged.


forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e. the monitor keyring, the client.admin keyring and the
bootstrap-osd and bootstrap-mds keyrings, from the node.

Usage::

        ceph-deploy forgetkeys


pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options, :option:`--install` and
:option:`--remove`, are used for this purpose.

Usage::

        ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

        ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where the packages are to be installed or removed
from.
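
For example, a couple of diagnostic packages could be installed on one node;
the package and host names are illustrative::

        ceph-deploy pkg --install htop,iotop node1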


calamari
--------

Install and configure Calamari nodes. It first checks if the distro is supported
for Calamari installation by ceph-deploy. The ``connect`` subcommand is used for
installation and configuration. It checks for the ``ceph-deploy`` configuration
file (``cd_conf``) and the Calamari release repo or ``calamari-minion`` repo. It
relies on the defaults for repo installation, as it doesn't install Ceph unless
specified otherwise. An ``options`` dictionary is also defined because
``ceph-deploy`` pops items internally, which causes issues when those items need
to be available for every host. If the distro is Debian/Ubuntu, it ensures that
the proxy is disabled for the ``calamari-minion`` repo. The ``calamari-minion``
package is then installed and custom repository files are added. The minion
config is placed prior to installation so that it is present when the minion
first starts. The config directory and the calamari salt config are created and
the salt-minion package is installed. If the distro is RedHat/CentOS, the
salt-minion service needs to be started.

Usage::

        ceph-deploy calamari {connect} [HOST] [HOST...]

Here, [HOST] is the hostname where Calamari is to be installed.

The ``--release`` option can be used to select a given release from the
repositories defined in :program:`ceph-deploy`'s configuration. It defaults to
``calamari-minion``.

Another option, :option:`--master`, can also be used with this command.
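
For example, minion nodes could be connected to a Calamari master; the master
domain and the hostnames are illustrative, and passing :option:`--master` to
the ``connect`` subcommand is assumed here::

        ceph-deploy calamari connect --master calamari.example.com node1 node2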

Options
=======

.. option:: --address

        IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

        Install packages modifying source repos.

.. option:: --ceph-conf

        Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

        Name of the cluster.

.. option:: --dev

        Install a bleeding edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

        Specify the (internal) cluster network.

.. option:: --dmcrypt

        Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

        Directory where ``dm-crypt`` keys are stored.

.. option:: --install

        Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

        Filesystem to use to format the disk (``xfs``, ``btrfs`` or ``ext4``).
        Note that support for ``btrfs`` and ``ext4`` is no longer tested or
        recommended; please use ``xfs``.

.. option:: --fsid

        Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

        Specify a GPG key URL to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

        Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

        Fetch packages and push them to hosts for a local repo mirror.

.. option:: --master

        The domain for the Calamari master server.

.. option:: --mkfs

        Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

        Install packages without modifying source repos.

.. option:: --no-ssh-copykey

        Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

        Overwrite an existing conf file on remote host (if present).

.. option:: --public-network

        Specify the public network for a cluster.

.. option:: --remove

        Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

        Install repo files only (skips package installation).

.. option:: --repo-url

        Specify a repo URL that mirrors/contains Ceph packages.

.. option:: --testing

        Install the latest development release.

.. option:: --username

        The username to connect to the remote host.

.. option:: --version

        The currently installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

        Destroy the partition table and content of a disk.


Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://ceph.com/ceph-deploy/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)