There are multiple ways to set up a development environment for the SSH orchestrator.
In the following I'll use the ``vstart`` method.

1) Make sure remoto is installed (0.35 or newer)

2) Use vstart to spin up a cluster::

    # ../src/vstart.sh -n --cephadm

*Note that when you specify --cephadm you have to have passwordless ssh access to localhost.*

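If you don't have passwordless ssh access to localhost yet, the usual recipe is to
generate a key pair (if you don't already have one) and authorize it for localhost,
e.g.::

    # ssh-keygen -t rsa
    # ssh-copy-id localhost
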
It will add your ``~/.ssh/id_rsa`` and ``~/.ssh/id_rsa.pub`` to ``mgr/ssh/ssh_identity_{key, pub}``
and add your ``$HOSTNAME`` to the list of known hosts.

This will also enable the cephadm mgr module and set it up as the orchestrator backend.

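You can verify that cephadm was picked up as the orchestrator backend with::

    # ceph orch status
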
While the above is sufficient for most operations, you may want to add a second host to the mix.
There is a ``Vagrantfile`` for creating a minimal cluster in ``src/pybind/mgr/cephadm/``.

If you wish to extend the one-node-localhost cluster, e.g. to test more sophisticated OSD
deployments, you can follow the next steps:

1) Spawn the VMs

All of the following is run from within the ``src/pybind/mgr/cephadm`` directory.

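Assuming the standard Vagrant workflow, the machines are brought up with::

    # vagrant up
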
This will spawn three machines by default:
mon0, mgr0 and osd0, with two additional disks.

You can overwrite the defaults by passing the environment variables ``MONS`` (default: 1),
``MGRS`` (default: 1), ``OSDS`` (default: 1) and ``DISKS`` (default: 2). If you don't want to
set the environment variables every time, you can instead create a JSON config file; see
``./vagrant.config.example.json`` for the expected format.

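For example, assuming the ``Vagrantfile`` picks these variables up as described, a slightly
larger cluster could be spawned with::

    # MONS=1 MGRS=1 OSDS=2 DISKS=4 vagrant up
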
The VMs also come with the necessary packages preinstalled, as well as your ``~/.ssh/id_rsa.pub``
key injected (for the users root and vagrant; the cephadm orchestrator currently connects as root).

2) Update the ssh-config

The cephadm orchestrator needs to understand how to connect to the new node. Most likely the VM
isn't reachable with the default settings used::

    StrictHostKeyChecking no

You want to adjust this by retrieving an adapted ssh_config from Vagrant::

    # vagrant ssh-config > ssh-config

Now set the newly created config for Ceph::

    # ceph cephadm set-ssh-config -i <path_to_ssh_conf>

3) Add the newly created host(s) to the inventory::

    # ceph orch host add <host>

4) Verify the inventory

You should see the hostname in the list.

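The hosts known to the orchestrator can be listed with::

    # ceph orch host ls
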
5) Verify the devices

To verify that all disks are present and in good shape, check whether all devices have been
discovered::

    # ceph orch device ls

6) Make a snapshot of all your VMs!

To avoid having to go through the whole setup again, snapshot your VMs now so you can revert
them to this state later.

In `this repository <https://github.com/Devp00l/vagrant-helper-scripts>`_ you can find two
scripts that will help you with taking a snapshot and reverting it, without having to manually
snapshot and revert each VM individually.

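If you'd rather not use the scripts, Vagrant's built-in snapshot commands work one VM at a
time, e.g. (machine and snapshot names here are just examples)::

    # vagrant snapshot save osd0 fresh-cluster
    # vagrant snapshot restore osd0 fresh-cluster
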
Understanding ``AsyncCompletion``
=================================

How can I store temporary variables?
------------------------------------

Let's imagine you want to write code similar to

.. code-block:: python

    hosts = self.get_hosts()
    inventory = self.get_inventory(hosts)
    return self._create_osd(hosts, drive_group, inventory)

That won't work, as ``get_hosts`` and ``get_inventory`` return objects
of type ``AsyncCompletion``.

Now let's imagine a Python 3 world, where we can use ``async`` and
``await``. Then we can actually write it like so:

.. code-block:: python

    hosts = await self.get_hosts()
    inventory = await self.get_inventory(hosts)
    return self._create_osd(hosts, drive_group, inventory)

Let's use a simple example with two placeholder functions, ``func_1`` and ``func_2``,
to make this clear.

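In the ``await`` style above, passing the result of one function into the other would read
(a minimal sketch)::

    val = await func_1()
    return func_2(val)
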
As we're not yet in Python 3, we need to write the ``await`` manually by
calling ``orchestrator.Completion.then()``:

.. code-block:: python

    func_1().then(lambda val: func_2(val))

    # or, equivalently
    func_1().then(func_2)

Now let's desugar the original example:

.. code-block:: python

    hosts = await self.get_hosts()
    inventory = await self.get_inventory(hosts)
    return self._create_osd(hosts, drive_group, inventory)

Now let's replace one ``await`` at a time:

.. code-block:: python

    hosts = await self.get_hosts()
    return self.get_inventory(hosts).then(lambda inventory:
        self._create_osd(hosts, drive_group, inventory))

And then the remaining one:

.. code-block:: python

    return self.get_hosts().then(lambda hosts:
        self.get_inventory(hosts).then(lambda inventory:
            self._create_osd(hosts, drive_group, inventory)))

This also works without lambdas:

.. code-block:: python

    def call_inventory(hosts):
        def call_create(inventory):
            return self._create_osd(hosts, drive_group, inventory)

        return self.get_inventory(hosts).then(call_create)

    return self.get_hosts().then(call_inventory)

We should add support for ``await`` as soon as we're on Python 3.

I want to call my function for every host!
------------------------------------------

Imagine you have a function that looks like so:

.. code-block:: python

    def deploy_stuff(name, node):
        ...

And you want to call ``deploy_stuff`` like so:

.. code-block:: python

    return [deploy_stuff(name, node) for node in nodes]

This won't work as expected. The number of ``AsyncCompletion`` objects
created should be ``O(1)``. But there is a solution:
``@async_map_completion``

.. code-block:: python

    @async_map_completion
    def deploy_stuff(name, node):
        ...

    return deploy_stuff([(name, node) for node in nodes])

This way, we're only creating one ``AsyncCompletion`` object. Note that
you should not create new ``AsyncCompletion`` objects within ``deploy_stuff``, as
we would then no longer have ``O(1)`` completions:

.. code-block:: python

    def other_async_function():
        ...

    @async_map_completion
    def deploy_stuff(name, node):
        return other_async_function()  # wrong!

I've tried to look into making Completions composable by being able to
call one completion from another completion, i.e. making them re-usable.
Something like this::

    >>> return self.get_hosts().then(self._create_osd)

where ``get_hosts`` returns a Completion of a list of hosts and
``_create_osd`` takes a list of hosts.

The concept behind this is to store the computation steps explicitly and
then explicitly evaluate the chain:

.. code-block:: python

    p = Completion(on_complete=lambda x: x*2).then(on_complete=lambda x: str(x))

    # (the chain is then evaluated with an input of 2)
    assert p.result == "4"

This is equivalent to a chain of two computation steps::

    +---------------+       +------------------+
    | lambda x: x*2 |  -->  | lambda x: str(x) |
    +---------------+       +------------------+