================================
Installation (Kubernetes + Helm)
================================

The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environment.
This documentation assumes a Kubernetes environment is available.

Current limitations
===================

- The public and cluster networks must be the same
- If the storage class user ID is not admin, you will have to manually create the user
  in your Ceph cluster and create its secret in Kubernetes
- ceph-mgr can only run with 1 replica
Install and start Helm
======================

Helm can be installed by following these instructions_.

Helm finds the Kubernetes cluster by reading from the local Kubernetes config file; make sure this is downloaded and accessible to the ``helm`` client.

A Tiller server must be configured and running for your Kubernetes cluster, and the local Helm client must be connected to it. It may be helpful to look at the Helm documentation for init_. To run Tiller locally and connect Helm to it, run::

  $ helm init

The ceph-helm project uses a local Helm repo by default to store charts. To start a local Helm repo server, run::

  $ helm serve &
  $ helm repo add local http://localhost:8879/charts

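To confirm that the local repo has been registered, you can list the repositories known to your Helm client; the ``local`` entry should point at the URL added above (output abridged, other repos may also be listed)::

  $ helm repo list
  NAME    URL
  local   http://localhost:8879/charts
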
Add ceph-helm to Helm local repos
=================================
::

  $ git clone https://github.com/ceph/ceph-helm
  $ cd ceph-helm/ceph
  $ make

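Once ``make`` completes, the packaged charts should be searchable from the local repo. As a quick sanity check (the chart version shown will depend on your checkout)::

  $ helm search local/ceph
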
Configure your Ceph cluster
===========================

Create a ``ceph-overrides.yaml`` file that will contain your Ceph configuration. This file may exist anywhere, but for this document it is assumed to reside in the user's home directory::

  $ cat ~/ceph-overrides.yaml
  network:
    public:  172.21.0.0/20
    cluster: 172.21.0.0/20

  osd_devices:
    - name: dev-sdd
      device: /dev/sdd
      zap: "1"
    - name: dev-sde
      device: /dev/sde
      zap: "1"

  storageclass:
    name: ceph-rbd
    pool: rbd
    user_id: k8s

.. note:: If the journal is not specified, it will be colocated with the device.

.. note:: The ``ceph-helm/ceph/ceph/values.yaml`` file contains the full
   list of options that can be set.

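If you want to see the full set of defaults before overriding them, you can also dump the chart's values with Helm (this assumes the chart has been added to the ``local`` repo as described above)::

  $ helm inspect values local/ceph
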
Create the Ceph cluster namespace
=================================

By default, ceph-helm components assume they are to be run in the ``ceph`` Kubernetes namespace. To create the namespace, run::

  $ kubectl create namespace ceph

Configure RBAC permissions
==========================

Kubernetes >=v1.6 makes RBAC the default admission controller. ceph-helm provides RBAC roles and permissions for each component::

  $ kubectl create -f ~/ceph-helm/ceph/rbac.yaml

The ``rbac.yaml`` file assumes that the Ceph cluster will be deployed in the ``ceph`` namespace.

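To verify that the RBAC objects were created, you can list the service accounts, roles and role bindings in the ``ceph`` namespace (the exact set of objects is defined by ``rbac.yaml``)::

  $ kubectl -n ceph get serviceaccounts,roles,rolebindings
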
Label kubelets
==============

The following labels need to be set to deploy a Ceph cluster:

- ceph-mon=enabled
- ceph-mgr=enabled
- ceph-osd=enabled
- ceph-osd-device-<name>=enabled

The ``ceph-osd-device-<name>`` label is created from the ``name`` value of each entry under ``osd_devices`` in our ``ceph-overrides.yaml``.
From our example above we will have the following two labels: ``ceph-osd-device-dev-sdd`` and ``ceph-osd-device-dev-sde``.

For each Ceph Monitor::

  $ kubectl label node <nodename> ceph-mon=enabled ceph-mgr=enabled

For each OSD node::

  $ kubectl label node <nodename> ceph-osd=enabled ceph-osd-device-dev-sdd=enabled ceph-osd-device-dev-sde=enabled

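To double-check which nodes carry a given label, use a label selector; each command should list the nodes you just labeled::

  $ kubectl get nodes -l ceph-mon=enabled
  $ kubectl get nodes -l ceph-osd=enabled --show-labels
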
Ceph Deployment
===============

Run the ``helm install`` command to deploy Ceph::

  $ helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
  NAME:   ceph
  LAST DEPLOYED: Wed Oct 18 22:25:06 2017
  NAMESPACE: ceph
  STATUS: DEPLOYED

  RESOURCES:
  ==> v1/Secret
  NAME                    TYPE    DATA  AGE
  ceph-keystone-user-rgw  Opaque  7     1s

  ==> v1/ConfigMap
  NAME              DATA  AGE
  ceph-bin-clients  2     1s
  ceph-bin          24    1s
  ceph-etc          1     1s
  ceph-templates    5     1s

  ==> v1/Service
  NAME      CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
  ceph-mon  None            <none>       6789/TCP  1s
  ceph-rgw  10.101.219.239  <none>       8088/TCP  1s

  ==> v1beta1/DaemonSet
  NAME              DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE-SELECTOR                                     AGE
  ceph-mon          3        3        0      3           0          ceph-mon=enabled                                  1s
  ceph-osd-dev-sde  3        3        0      3           0          ceph-osd-device-dev-sde=enabled,ceph-osd=enabled  1s
  ceph-osd-dev-sdd  3        3        0      3           0          ceph-osd-device-dev-sdd=enabled,ceph-osd=enabled  1s

  ==> v1beta1/Deployment
  NAME                  DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
  ceph-mds              1        1        1           0          1s
  ceph-mgr              1        1        1           0          1s
  ceph-mon-check        1        1        1           0          1s
  ceph-rbd-provisioner  2        2        2           0          1s
  ceph-rgw              1        1        1           0          1s

  ==> v1/Job
  NAME                                 DESIRED  SUCCESSFUL  AGE
  ceph-mgr-keyring-generator           1        0           1s
  ceph-mds-keyring-generator           1        0           1s
  ceph-osd-keyring-generator           1        0           1s
  ceph-rgw-keyring-generator           1        0           1s
  ceph-mon-keyring-generator           1        0           1s
  ceph-namespace-client-key-generator  1        0           1s
  ceph-storage-keys-generator          1        0           1s

  ==> v1/StorageClass
  NAME      TYPE
  ceph-rbd  ceph.com/rbd

The output from ``helm install`` shows the different types of resources that will be deployed.

A StorageClass named ``ceph-rbd`` of type ``ceph.com/rbd`` will be created along with the ``ceph-rbd-provisioner`` Pods. These
allow an RBD to be automatically provisioned upon creation of a PVC. RBDs are also formatted when mapped for the first
time. All RBDs use the ext4 filesystem; ``ceph.com/rbd`` does not support the ``fsType`` option.
By default, RBDs use image format 2 and the layering feature. You can override the following StorageClass defaults in your values file::

  storageclass:
    name: ceph-rbd
    pool: rbd
    user_id: k8s
    user_secret_name: pvc-ceph-client-key
    image_format: "2"
    image_features: layering

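To inspect the StorageClass that the chart actually created, along with the parameters it was rendered with, you can dump it as YAML::

  $ kubectl get storageclass ceph-rbd -o yaml
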
Check that all Pods are running with the command below. This might take a few minutes::

  $ kubectl -n ceph get pods
  NAME                                   READY  STATUS   RESTARTS  AGE
  ceph-mds-3804776627-976z9              0/1    Pending  0         1m
  ceph-mgr-3367933990-b368c              1/1    Running  0         1m
  ceph-mon-check-1818208419-0vkb7        1/1    Running  0         1m
  ceph-mon-cppdk                         3/3    Running  0         1m
  ceph-mon-t4stn                         3/3    Running  0         1m
  ceph-mon-vqzl0                         3/3    Running  0         1m
  ceph-osd-dev-sdd-6dphp                 1/1    Running  0         1m
  ceph-osd-dev-sdd-6w7ng                 1/1    Running  0         1m
  ceph-osd-dev-sdd-l80vv                 1/1    Running  0         1m
  ceph-osd-dev-sde-6dq6w                 1/1    Running  0         1m
  ceph-osd-dev-sde-kqt0r                 1/1    Running  0         1m
  ceph-osd-dev-sde-lp2pf                 1/1    Running  0         1m
  ceph-rbd-provisioner-2099367036-4prvt  1/1    Running  0         1m
  ceph-rbd-provisioner-2099367036-h9kw7  1/1    Running  0         1m
  ceph-rgw-3375847861-4wr74              0/1    Pending  0         1m

.. note:: The MDS and RGW Pods are pending since we did not label any nodes with
   ``ceph-rgw=enabled`` or ``ceph-mds=enabled``.

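If you do want to run the MDS and RGW daemons, label one or more nodes accordingly and the pending Pods should then be scheduled (``<nodename>`` is a placeholder)::

  $ kubectl label node <nodename> ceph-mds=enabled ceph-rgw=enabled
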
Once all Pods are running, check the status of the Ceph cluster from one Mon::

  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph -s
    cluster:
      id:     e8f9da03-c2d2-4ad3-b807-2a13d0775504
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum mira115,mira110,mira109
      mgr: mira109(active)
      osd: 6 osds: 6 up, 6 in

    data:
      pools:   0 pools, 0 pgs
      objects: 0 objects, 0 bytes
      usage:   644 MB used, 5555 GB / 5556 GB avail
      pgs:

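From the same Mon container you can also check how the OSDs map to hosts and devices; the Pod name ``ceph-mon-cppdk`` comes from the listing above and will differ in your cluster::

  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd tree
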
Configure a Pod to use a PersistentVolume from Ceph
===================================================

Create a keyring for the ``k8s`` user defined in ``~/ceph-overrides.yaml`` and convert
it to base64::

  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- bash
  # ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx pool=rbd' | base64
  QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
  # exit

Edit the user secret present in the ``ceph`` namespace::

  $ kubectl -n ceph edit secrets/pvc-ceph-client-key

Replace the ``key`` value with your own base64-encoded key and save::

  apiVersion: v1
  data:
    key: QVFCLzdPaFoxeUxCRVJBQUVEVGdHcE9YU3BYMVBSdURHUEU0T0E9PQo=
  kind: Secret
  metadata:
    creationTimestamp: 2017-10-19T17:34:04Z
    name: pvc-ceph-client-key
    namespace: ceph
    resourceVersion: "8665522"
    selfLink: /api/v1/namespaces/ceph/secrets/pvc-ceph-client-key
    uid: b4085944-b4f3-11e7-add7-002590347682
  type: kubernetes.io/rbd

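If you prefer not to edit the Secret interactively, the same change can be applied non-interactively with ``kubectl patch`` (a sketch; substitute your own base64-encoded key)::

  $ kubectl -n ceph patch secret pvc-ceph-client-key -p '{"data":{"key":"<base64-encoded-key>"}}'
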
We are going to create a Pod that consumes an RBD in the ``default`` namespace.
Copy the user secret from the ``ceph`` namespace to ``default``::

  $ kubectl -n ceph get secrets/pvc-ceph-client-key -o json | jq '.metadata.namespace = "default"' | kubectl create -f -
  secret "pvc-ceph-client-key" created
  $ kubectl get secrets
  NAME                  TYPE                                  DATA  AGE
  default-token-r43wl   kubernetes.io/service-account-token   3     61d
  pvc-ceph-client-key   kubernetes.io/rbd                     1     20s

Create and initialize the RBD pool::

  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd 256
  pool 'rbd' created
  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd pool init rbd

.. important:: Kubernetes uses the RBD kernel module to map RBDs to hosts. Luminous requires
   CRUSH_TUNABLES 5 (Jewel). The minimal kernel version for these tunables is 4.5.
   If your kernel does not support these tunables, run ``ceph osd crush tunables hammer``.

.. important:: Since RBDs are mapped on the host system, hosts need to be able to resolve
   the ``ceph-mon.ceph.svc.cluster.local`` name managed by the kube-dns service. To get the
   IP address of the kube-dns service, run ``kubectl -n kube-system get svc/kube-dns``.

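One way to make that name resolvable from the hosts is to point their resolver at the kube-dns ClusterIP, for example as below (a sketch only; the IP address shown is illustrative, and how you configure the host resolver depends on your distribution)::

  $ kubectl -n kube-system get svc/kube-dns -o jsonpath='{.spec.clusterIP}'
  10.96.0.10
  $ echo "nameserver 10.96.0.10" | sudo tee -a /etc/resolv.conf
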
Create a PVC::

  $ cat pvc-rbd.yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: ceph-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    storageClassName: ceph-rbd

  $ kubectl create -f pvc-rbd.yaml
  persistentvolumeclaim "ceph-pvc" created
  $ kubectl get pvc
  NAME      STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
  ceph-pvc  Bound   pvc-1c2ada50-b456-11e7-add7-002590347682  20Gi      RWO          ceph-rbd      3s

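The claim is backed by a dynamically provisioned PersistentVolume, which can be listed as well::

  $ kubectl get pv
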
You can check that the RBD has been created on your cluster::

  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd ls
  kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd info kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915
  rbd image 'kubernetes-dynamic-pvc-1c2e9442-b456-11e7-9bd2-2a4159ce3915':
          size 20480 MB in 5120 objects
          order 22 (4096 kB objects)
          block_name_prefix: rbd_data.10762ae8944a
          format: 2
          features: layering
          flags:
          create_timestamp: Wed Oct 18 22:45:59 2017

Create a Pod that will use the PVC::

  $ cat pod-with-rbd.yaml
  kind: Pod
  apiVersion: v1
  metadata:
    name: mypod
  spec:
    containers:
      - name: busybox
        image: busybox
        command:
          - sleep
          - "3600"
        volumeMounts:
          - mountPath: "/mnt/rbd"
            name: vol1
    volumes:
      - name: vol1
        persistentVolumeClaim:
          claimName: ceph-pvc

  $ kubectl create -f pod-with-rbd.yaml
  pod "mypod" created

Check the Pod::

  $ kubectl get pods
  NAME   READY  STATUS   RESTARTS  AGE
  mypod  1/1    Running  0         17s
  $ kubectl exec mypod -- mount | grep rbd
  /dev/rbd0 on /mnt/rbd type ext4 (rw,relatime,stripe=1024,data=ordered)

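As a final check, you can write a file to the mounted RBD from inside the Pod and read it back (the file name is arbitrary)::

  $ kubectl exec mypod -- sh -c 'echo "hello from rbd" > /mnt/rbd/test.txt'
  $ kubectl exec mypod -- cat /mnt/rbd/test.txt
  hello from rbd
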
Logging
=======

OSD and Monitor logs can be accessed via the ``kubectl logs [-f]`` command. Monitors have multiple streams of logging;
each stream is accessible from a container running in the ceph-mon Pod.

There are three containers running in the ceph-mon Pod:

- ceph-mon, the equivalent of ceph-mon.hostname.log on bare metal
- cluster-audit-log-tailer, the equivalent of ceph.audit.log on bare metal
- cluster-log-tailer, the equivalent of ceph.log on bare metal, or ``ceph -w``

Each container is accessible via the ``--container`` or ``-c`` option.
For instance, to access the cluster log, one can run::

  $ kubectl -n ceph logs ceph-mon-cppdk -c cluster-log-tailer

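OSD logs can be followed in the same way; the Pod name below comes from the earlier ``kubectl -n ceph get pods`` output and will differ in your cluster::

  $ kubectl -n ceph logs -f ceph-osd-dev-sdd-6dphp
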
.. _ceph-helm: https://github.com/ceph/ceph-helm/
.. _instructions: https://github.com/kubernetes/helm/blob/master/docs/install.md
.. _init: https://github.com/kubernetes/helm/blob/master/docs/helm/helm_init.md