:orphan:

=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *prepare* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node and sudo for
administrator privileges on them, and its underlying Python scripts automate
the manual process of installing Ceph on each node from the admin node itself.
It can easily be run on a workstation and doesn't require servers, databases or
any other automation tools. With :program:`ceph-deploy`, it is easy to set up
and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps should use other tools,
e.g., ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.
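
As a rough illustration only (the hostnames and directory paths below are
placeholders, not defaults of the tool), a minimal end-to-end run using the
commands from the synopsis might look like::

    ceph-deploy new mon1
    ceph-deploy install mon1 osd1 osd2
    ceph-deploy mon create-initial
    ceph-deploy osd prepare osd1:/var/local/osd0 osd2:/var/local/osd1
    ceph-deploy osd activate osd1:/var/local/osd0 osd2:/var/local/osd1
    ceph-deploy admin admin-node mon1 osd1 osd2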

Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from the admin node to gain passwordless ssh access to
the monitor node(s), validates the host IP, creates a cluster with a new initial
monitor node or nodes for monitor quorum, a Ceph configuration file, a monitor
secret keyring and a log file for the new cluster. It populates the newly
created Ceph configuration file with the ``fsid`` of the cluster, and the
hostnames and IP addresses of the initial monitor members, under the
``[global]`` section.

Usage::

    ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (short hostname, i.e., ``hostname -s``).

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used with
this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the one IP from the
remote host that exists within the subnet range. The public network can also be
added at runtime using the :option:`--public-network` option with the command
as mentioned above.
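
For example, a sketch of bootstrapping a three-monitor quorum on a dedicated
public subnet (the hostnames and the 192.0.2.0/24 subnet are illustrative
values, not defaults) might be::

    ceph-deploy new --public-network 192.0.2.0/24 mon1 mon2 mon3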


install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on the admin and other nodes using passwordless ssh
and sudo so that Ceph packages from the upstream repository get more priority.
It then detects the platform and distribution for the hosts and installs Ceph
normally by downloading distro-compatible packages, provided an adequate repo
for Ceph has already been added. The ``--release`` flag is used to get the
latest release for installation. During detection of platform and distribution
before installation, if it finds the ``distro.init`` to be ``sysvinit`` (Fedora,
CentOS/RHEL etc.), it doesn't allow installation with a custom cluster name and
uses the default name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo url with :option:`--repo-url` for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In case
of installation from a custom repo, a boolean is used to determine the logic
needed to proceed with a custom repo installation. A custom repo install helper
is used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. ``cd_conf`` is the object built from ``argparse``
that holds the flags and information needed to determine what metadata from the
configuration is to be used.

A user can also opt to install only the repository without installing Ceph and
its dependencies by using the :option:`--repo` option.

Usage::

    ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

An option ``--release`` is used to install a release known as CODENAME
(default: firefly).

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.
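
For instance, a hedged sketch of installing a specific release on two hosts and
of installing from a custom mirror (the hostnames, the ``jewel`` codename and
the repo/GPG URLs are placeholder examples, not recommendations) could look
like::

    ceph-deploy install --release jewel node1 node2
    ceph-deploy install --repo-url https://mirror.example.com/ceph \
        --gpg-url https://mirror.example.com/release.asc node1 node2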


mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it in the desired host. The key generally has a
format of ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring,
it runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs appropriate
commands based on ``distro.init`` to start the ``mds``.

Usage::

    ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
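
For example (the hostname and daemon name below are illustrative), an mds can
be created with or without an explicit daemon name::

    ceph-deploy mds create mds1
    ceph-deploy mds create mds1:mds-daemon-a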


mon
---

Deploy Ceph monitor on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys monitors defined in
``mon initial members`` under the ``[global]`` section of the Ceph configuration
file, waits until they form quorum and then runs ``gatherkeys``, reporting the
monitor status along the way. If the monitors don't form quorum the command
will eventually time out.

Usage::

    ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to the ``mon initial members`` defined under the ``[global]``
section of the Ceph configuration file. ``create`` first detects the platform
and distro for the desired hosts and checks if the hostname is compatible for
deployment. It then uses the monitor keyring initially created using the ``new``
command and deploys the monitor on the desired host. If multiple hosts were
specified during the ``new`` command, i.e., if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for deployment of monitors. In this process a keyring parser is
used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that will be injected into the monitors with :option:`--mkfs` on
remote nodes. All keyring files are concatenated to be in a directory ending
with ``.keyring``. During this process the helper uses the list of sections
returned by the keyring parser to check if an entity is already present in a
keyring and, if not, adds it. The concatenated keyring is used for deployment
of monitors to the desired multiple hosts.

Usage::

    ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).
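
As a sketch (hostnames are illustrative), deploying monitors to the three hosts
listed in ``mon initial members`` would simply be::

    ceph-deploy mon create mon1 mon2 mon3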

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects the platform and distro for the desired host and checks if the hostname
is compatible for deployment. It then uses the monitor keyring, ensures the
configuration for the new monitor host and adds the monitor to the cluster. If
the section for the monitor exists and defines a ``mon addr``, that address
will be used; otherwise it will fall back to resolving the hostname to an IP.
If :option:`--address` is used it will override all other options. After adding
the monitor to the cluster, it gives it some time to start. It then looks for
any monitor errors and checks the monitor status. Monitor errors arise if the
monitor is not added in ``mon initial members``, if it doesn't exist in
``monmap`` or if neither ``public_addr`` nor ``public_network`` keys were
defined for monitors. Under such conditions, monitors may not be able to form
quorum. Monitor status tells if the monitor is up and running normally. The
status is checked by running ``ceph daemon mon.hostname mon_status`` on the
remote end, which provides the output and returns a boolean status. ``False``
means the monitor is not healthy even if it is up and running, while ``True``
means the monitor is up and running correctly.

Usage::

    ceph-deploy mon add [HOST]

    ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note, unlike other ``mon`` subcommands, only one node can be
specified at a time.

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon really stopped, creates an archive directory ``mon-remove``
under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format in it and removes the monitor from the
cluster by running the ``ceph remove...`` command.

Usage::

    ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.


gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, the monitor
keyring and the ``bootstrap-mds``/``bootstrap-osd`` keyring from the monitor
host. These authentication keys are used when new ``monitors/OSDs/MDS`` are
added to the cluster.

Usage::

    ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.


disk
----

Manage disks on a remote host. It actually triggers the ``ceph-disk`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

    ceph-deploy disk list [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with the Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses the
entire partition and adds a new partition to the journal disk.

Usage::

    ceph-deploy disk prepare [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts it in the correct
location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
triggered by ``udev`` when it sees the OSD GPT partition type or on ceph service
start with ``ceph disk activate-all``.

Usage::

    ceph-deploy disk activate [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
It actually uses ``sgdisk`` and its ``--zap-all`` option to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to a
GPT disk. The ``prepare`` subcommand can then be executed, which will create a
new GPT partition.

Usage::

    ceph-deploy disk zap [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
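
For example, a typical zap-then-prepare sequence on one node (the hostname and
device below are placeholders; zapping destroys all data on that device) might
look like::

    ceph-deploy disk zap osdnode1:sdb
    ceph-deploy disk prepare osdnode1:sdb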


osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended number, which would cause issues with
the maximum allowed PIDs in a system. It then reads the bootstrap-osd key for
the cluster, or writes the bootstrap key if it is not found. It then uses the
:program:`ceph-disk` utility's ``prepare`` subcommand to prepare the disk and
journal and deploy the OSD on the desired host. Once prepared, it gives some
time to the OSD to settle and checks for any possible errors and, if found,
reports them to the user.

Usage::

    ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

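
For instance, preparing one OSD whose journal lives on a separate SSD partition
(the hostname and device names are illustrative only) could be written as::

    ceph-deploy osd prepare osdnode1:sdb:/dev/ssd1
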
Subcommand ``activate`` activates the OSD prepared using the ``prepare``
subcommand. It actually uses the :program:`ceph-disk` utility's ``activate``
subcommand with the appropriate init type based on the distro to activate the
OSD. Once activated, it gives some time to the OSD to start and checks for any
possible errors and, if found, reports them to the user. It checks the status of
the prepared OSD, checks the OSD tree and makes sure the OSDs are up and in.

Usage::

    ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

Subcommand ``create`` uses the ``prepare`` and ``activate`` subcommands to
create an OSD.

Usage::

    ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

Subcommand ``list`` lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ``ceph-disk list`` output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
checks if the OSD is in an OSD tree and prints the OSD metadata.

Usage::

    ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]


admin
-----

Push configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

    ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.


config
------

Push/pull a configuration file to/from a remote host. The ``push`` subcommand
takes the configuration file from the admin host and writes it out to the
remote host under the ``/etc/ceph`` directory. The ``pull`` subcommand does the
opposite, i.e., it pulls the configuration file under the ``/etc/ceph``
directory of the remote host to the admin node.

Usage::

    ceph-deploy config push [HOST] [HOST...]

    ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed
to or pulled from.


uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because
they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.


purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls Ceph packages and purges
all data. However, some dependencies like ``librbd1`` and ``librados2`` will
not be removed because they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be purged.


purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks if
Ceph is still installed on the selected host and, if installed, it won't purge
data from it. If Ceph is already uninstalled from the host, it tries to remove
the contents of ``/var/lib/ceph``. If it fails, the OSDs are probably still
mounted and need to be unmounted to continue. It unmounts the OSDs and tries to
remove the contents of ``/var/lib/ceph`` again and checks for errors. It also
removes the contents of ``/etc/ceph``. Once all steps are successfully
completed, all the Ceph data from the selected host is removed.

Usage::

    ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.
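
Because ``purgedata`` refuses to remove data while Ceph is still installed, a
typical teardown (hostnames illustrative) first removes the packages and then
the data::

    ceph-deploy purge node1 node2
    ceph-deploy purgedata node1 node2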


forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e., the monitor keyring, ``client.admin`` keyring,
bootstrap-osd keyring and bootstrap-mds keyring, from the node.

Usage::

    ceph-deploy forgetkeys


pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options :option:`--install` and
:option:`--remove` are used for this purpose.

Usage::

    ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

    ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where the packages are to be installed or removed
from.
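
For example (the package and host names are illustrative only), installing and
later removing a pair of packages on two nodes might look like::

    ceph-deploy pkg --install ntp,smartmontools node1 node2
    ceph-deploy pkg --remove ntp,smartmontools node1 node2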


calamari
--------

Install and configure Calamari nodes. It first checks if the distro is supported
for Calamari installation by ceph-deploy. An argument ``connect`` is used for
installation and configuration. It checks for the ``ceph-deploy`` configuration
file (``cd_conf``) and the Calamari release repo or the ``calamari-minion``
repo. It relies on the default for repo installation as it doesn't install Ceph
unless specified otherwise. An ``options`` dictionary is also defined because
``ceph-deploy`` pops items internally, which causes issues when those items are
needed to be available for every host. If the distro is Debian/Ubuntu, it is
ensured that the proxy is disabled for the ``calamari-minion`` repo. The
``calamari-minion`` package is then installed and custom repository files are
added. The minion config is placed prior to installation so that it is present
when the minion first starts. The config directory and the calamari salt config
are created, and the ``salt-minion`` package is installed. If the distro is
RedHat/CentOS, the ``salt-minion`` service needs to be started.

Usage::

    ceph-deploy calamari {connect} [HOST] [HOST...]

Here, [HOST] is the hostname where Calamari is to be installed.

An option ``--release`` can be used to use a given release from repositories
defined in :program:`ceph-deploy`'s configuration. Defaults to ``calamari-minion``.

Another option :option:`--master` can also be used with this command.
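
A hedged sketch of connecting two minion hosts to a Calamari master (the
``master.example.com`` domain and hostnames are placeholders) might be::

    ceph-deploy calamari connect --master master.example.com node1 node2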

Options
=======

.. option:: --address

    IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

    Install packages modifying source repos.

.. option:: --ceph-conf

    Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

    Name of the cluster.
.. option:: --dev

    Install a bleeding edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

    Specify the (internal) cluster network.

.. option:: --dmcrypt

    Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

    Directory where ``dm-crypt`` keys are stored.

.. option:: --install

    Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

    Filesystem to use to format disk ``(xfs, btrfs or ext4)``. Note that
    support for btrfs and ext4 is no longer tested or recommended; please use
    xfs.

.. option:: --fsid

    Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

    Specify a GPG key url to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

    Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

    Fetch packages and push them to hosts for a local repo mirror.

.. option:: --master

    The domain for the Calamari master server.

.. option:: --mkfs

    Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

    Install packages without modifying source repos.

.. option:: --no-ssh-copykey

    Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

    Overwrite an existing conf file on remote host (if present).

.. option:: --public-network

    Specify the public network for a cluster.

.. option:: --remove

    Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

    Install repo files only (skips package installation).

.. option:: --repo-url

    Specify a repo url that mirrors/contains Ceph packages.

.. option:: --testing

    Install the latest development release.

.. option:: --username

    The username to connect to the remote host.

.. option:: --version

    The current installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

    Destroy the partition table and content of a disk.

Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://ceph.com/ceph-deploy/docs for more information.


See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)