.. _kubernetes-dev:

=======================================
Hacking on Ceph in Kubernetes with Rook
=======================================

.. warning::

   This is *not* official user documentation for setting up production
   Ceph clusters with Kubernetes. It is aimed at developers who want
   to hack on Ceph in Kubernetes.

This guide is aimed at Ceph developers getting started with running
in a Kubernetes environment. It assumes that you may be hacking on Rook,
Ceph, or both, so everything is built from source.

TL;DR for hacking on MGR modules
================================

Make your changes to the Python code base and then, from Ceph's
``build`` directory, run::

    ../src/script/kubejacker/kubejacker.sh '192.168.122.1:5000'

where ``'192.168.122.1:5000'`` is a local docker registry and
Rook's ``CephCluster`` CR uses ``image: 192.168.122.1:5000/ceph/ceph:latest``.

1. Build a kubernetes cluster
=============================

Before installing Ceph/Rook, make sure you've got a working kubernetes
cluster with some nodes added (i.e. ``kubectl get nodes`` shows you something).
The rest of this guide assumes that your development workstation has network
access to your kubernetes cluster, such that ``kubectl`` works from your
workstation.

`There are many ways <https://kubernetes.io/docs/setup/>`_
to build a kubernetes cluster: here we include some tips/pointers on where
to get started.
41 | ||
9f95a23c TL |
42 | `kubic-terraform-kvm <https://github.com/kubic-project/kubic-terraform-kvm>`_ |
43 | might also be an option. | |
11fdf7f2 | 44 | |
9f95a23c TL |
45 | Or `Host your own <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`_ with |
46 | ``kubeadm``. | |

Some Tips
---------

Here are some tips for a smoother ride with ``kubeadm`` and ``minikube``.

``kubeadm``:

- If you have previously added any yum/deb repos for kubernetes packages,
  disable them before trying to use the packages.cloud.google.com repository.
  If you don't, you'll get quite confusing conflicts.
- Even if your distro already has docker, make sure you install a
  version from docker.com that is within the range mentioned in the
  kubeadm install instructions. In particular, note that the docker shipped
  with CentOS 7 and 8 will *not* work.

``minikube``:

- Start up minikube by passing the local docker registry address::

      minikube start --driver=docker --insecure-registry='192.168.122.1:5000'

Hosted elsewhere
----------------

If you do not have any servers to hand, you might try a cloud
provider such as Google Compute Engine. Your mileage may
vary when it comes to what kinds of storage devices are visible
to your kubernetes cluster.

Make sure you check how much it's costing you before you spin up a big cluster!

2. Run a docker registry
========================

Run this somewhere accessible from both your workstation and your
kubernetes cluster (i.e. so that ``docker push/pull`` just works everywhere).
This is likely to be the same host you're using as your kubernetes master.

1. Install the ``docker-distribution`` package.
2. If you want to configure the port, edit
   ``/etc/docker-distribution/registry/config.yml``.
3. Enable the registry service:

   ::

     systemctl enable docker-distribution
     systemctl start docker-distribution

You may need to mark the registry as **insecure**.
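
For example, on a workstation or node running docker, one common way to
allow an insecure registry is to list it in ``/etc/docker/daemon.json``
and restart docker (the address below is an example; substitute your
registry's)::

    {
        "insecure-registries": ["192.168.122.1:5000"]
    }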

3. Build Rook
=============

.. note::

   Building Rook is **not required** to make changes to Ceph.

Install Go if you don't already have it.

Download the Rook source code:

::

  go get github.com/rook/rook

  # Ignore this warning, as Rook is not a conventional go package
  can't load package: package github.com/rook/rook: no Go files in /home/jspray/go/src/github.com/rook/rook

You will now have a Rook source tree in ``~/go/src/github.com/rook/rook`` --
you may be tempted to clone it elsewhere, but your life will be easier if you
leave it in your GOPATH.
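
If ``go get`` gives you trouble, an equivalent way to end up with the
source tree in the right place is a plain ``git clone`` into your
GOPATH::

    mkdir -p ~/go/src/github.com/rook
    git clone https://github.com/rook/rook.git ~/go/src/github.com/rook/rook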

Run ``make`` in the root of your Rook tree to build its binaries and containers:

::

  make
  ...
  === saving image build-9204c79b/ceph-amd64
  === docker build build-9204c79b/ceph-toolbox-base-amd64
  sha256:653bb4f8d26d6178570f146fe637278957e9371014ea9fce79d8935d108f1eaa
  === docker build build-9204c79b/ceph-toolbox-amd64
  sha256:445d97b71e6f8de68ca1c40793058db0b7dd1ebb5d05789694307fd567e13863
  === caching image build-9204c79b/ceph-toolbox-base-amd64

You can use ``docker image ls`` to see the resulting built images. The
images you care about are the ones with tags ending "ceph-amd64" (used
for the Rook operator and Ceph daemons) and "ceph-toolbox-amd64" (used
for the "toolbox" container where the CLI is run).

4. Build Ceph
=============

.. note::

   Building Ceph is **not required** to make changes to MGR modules
   written in Python.

The Rook containers and the Ceph containers are now independent. Note that
Rook's Ceph client libraries need to communicate with the Ceph cluster,
so a compatible major version is required.

You can run a registry docker container with access to your Ceph source
tree using a command like:

::

  docker run -i -v /my/ceph/src:/my/ceph/src -p 192.168.122.1:5000:5000 -t --name registry registry:2

Once you have built Ceph, you can inject the resulting binaries into
the Rook container image using the ``kubejacker.sh`` script (run from
your build directory but from *outside* your build container).

5. Run Kubejacker
=================

``kubejacker`` needs access to your docker registry. Execute the script
to build a docker image containing your latest Ceph binaries:

::

  build$ ../src/script/kubejacker/kubejacker.sh "<host>:<port>"

Now you've got your freshly built Rook and your freshly built Ceph in a
single container image, ready to run. Next time you change something
in Ceph, you can re-run this to update your image and restart your
kubernetes containers. If you change something in Rook, then re-run the Rook
build, and the Ceph build too.
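
One way to make running pods pick up a re-pushed ``:latest`` image is
simply to delete them and let kubernetes recreate them (this assumes the
pod's ``imagePullPolicy`` causes the image to be re-pulled). For example,
to restart the MGR::

    kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr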

6. Run a Rook cluster
=====================

Please refer to `Rook's documentation <https://rook.io/docs/rook/master/ceph-quickstart.html>`_
for setting up a Rook operator, a Ceph cluster and the toolbox.

The Rook source tree includes example .yaml files in
``cluster/examples/kubernetes/ceph/``. Copy these into
a working directory, and edit as necessary to configure
the setup you want:

- Ensure that ``spec.cephVersion.image`` points to your docker registry::

      spec:
        cephVersion:
          allowUnsupported: true
          image: 192.168.122.1:5000/ceph/ceph:latest
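
Note that the Rook operator must be running before the cluster CR will do
anything. A typical sequence, using file names from Rook's examples
directory (these can differ between Rook versions), looks like::

    kubectl apply -f common.yaml
    kubectl apply -f operator.yaml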

Then, load the configuration into the kubernetes API using ``kubectl``:

::

  kubectl apply -f ./cluster-test.yaml

Use ``kubectl -n rook-ceph get pods`` to check that the operator
pod, the Ceph daemons and the toolbox are coming up.

Once everything is up and running,
you should be able to open a shell in the toolbox container and
run ``ceph status``.

If your mon services start but the rest don't, it could be that they're
unable to form a quorum due to a Kubernetes networking issue: check that
containers in your Kubernetes cluster can ping containers on other nodes.
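
A quick way to check this is to exec into one pod and ping the IP of a
pod on another node (the pod name and IP below are placeholders, and this
assumes the container image includes ``ping``; find real names and IPs
with ``kubectl -n rook-ceph get pods -o wide``)::

    kubectl -n rook-ceph get pods -o wide
    kubectl -n rook-ceph exec <pod-on-node-a> -- ping -c 3 <ip-of-pod-on-node-b>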

Cheat sheet
===========

Open a shell in your toolbox container::

  kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- bash

Inspect the Rook operator container's logs::

  kubectl -n rook-ceph logs -l app=rook-ceph-operator

Inspect the ceph-mgr container's logs::

  kubectl -n rook-ceph logs -l app=rook-ceph-mgr