.. _snf-deploy:

snf-deploy tool
^^^^^^^^^^^^^^^

The `snf-deploy` tool allows you to deploy Synnefo automatically, in two ways:

1. Create a virtual cluster on your local machine and then deploy Synnefo on
   that cluster.
2. Deploy on a pre-existing cluster of physical nodes running Debian Squeeze.

Currently, `snf-deploy` is mostly useful for testing/demo installations and is
not recommended for production Synnefo deployments. If you want to deploy
Synnefo in production, please first read the :ref:`Admin's installation guide
<quick-install-admin-guide>` and then the :ref:`Admin's guide <admin-guide>`.

If you use `snf-deploy` you will end up with an up-and-running Synnefo
installation, but the end-to-end functionality will depend on your underlying
infrastructure (e.g. whether nested virtualization is enabled on your machine,
whether the router is properly configured, whether the nodes have fully
qualified domain names, etc.). In any case, it will let you get a grasp of the
Web UI/API and base functionality of Synnefo, and also provide a proper
configuration that you can consult later, while reading the Admin guides, to
set up a production environment that will scale up and use all available
features (e.g. RADOS, Archipelago, etc).

`snf-deploy` is a Debian package that should be installed locally. It allows
you to install Synnefo on remote nodes (if you go for (2)), or to spawn a
cluster of VMs on your local machine using KVM and then install Synnefo on
that cluster (if you go for (1)). To this end, we break our description down
into three sections:

a. :ref:`snf-deploy configuration <conf>`
b. :ref:`Creating a virtual cluster <vcluster>` (needed for (1))
c. :ref:`Synnefo deployment <inst>` (either on the virtual nodes created in
   section b, or on remote physical nodes)

If you go for (1) you will need to walk through all the sections. If you go
for (2), you should skip section `(b) <vcluster>`, since you only need
sections `(a) <conf>` and `(c) <inst>`.

Before getting any further, we should mention the roles that `snf-deploy`
refers to. The Synnefo roles are described in detail :ref:`here
<physical-node-roles>`. Note that multiple roles can co-exist on the same node
(virtual or physical).

Currently, `snf-deploy` recognizes the following combined roles:

* **accounts** = **WEBSERVER** + **ASTAKOS**
* **pithos** = **WEBSERVER** + **PITHOS**
* **cyclades** = **WEBSERVER** + **CYCLADES**
* **db** = **ASTAKOS_DB** + **PITHOS_DB** + **CYCLADES_DB**

the following independent roles:

* **qh** = **QHOLDER**
* **cms** = **CMS**
* **mq** = **MQ**
* **ns** = **NS**
* **client** = **CLIENT**
* **router**: the node that does any routing and NAT needed

The above define the roles relative to the Synnefo components. However, in
order to have instances up-and-running, at least one backend must be
associated with Cyclades. Backends are Ganeti clusters, each with multiple
**GANETI_NODE**\ s. Please note that these nodes may be the same as the ones
used for the previous roles. To this end, `snf-deploy` also recognizes:

* **cluster_nodes** = **G_BACKEND** = all available nodes of a specific backend
* **master_node** = **GANETI_MASTER**

Finally, it recognizes the group role:

* **existing_nodes** = **SYNNEFO** + (N x **G_BACKEND**)

In the future, `snf-deploy` will recognize all the independent roles of a
scale-out deployment, as stated in the :ref:`scale up section <scale-up>`.
When that's done, it won't need to introduce its own roles (stated here in
lowercase) but will rather use the scale-out ones (stated in uppercase in the
admin guide).


.. _conf:

Configuration (a)
=================

All configuration of `snf-deploy` happens by editing the following files
under ``/etc/snf-deploy``:
90 |
``nodes.conf`` |
91 |
-------------- |
92 |
|
93 |
This file reflects the hardware infrastucture on which Synnefo is going to be |
94 |
deployed and is the first to be set before running `snf-deploy`. |
95 |
|
96 |
Defines the nodes' hostnames and their IPs. Currently `snf-deploy` expects all |
97 |
nodes to reside in the same network subnet and domain, and share the same |
98 |
gateway and nameserver. Since Synnefo requires FQDNs to operate, a nameserver |
99 |
is going to be automatically setup in the cluster by `snf-deploy`. Thus, the |
100 |
nameserver's IP should appear among the defined node IPs. From now on, we will |
101 |
refer to the nodes with their hostnames. This implies their FQDN and their IP. |
102 |
|
103 |
Also, defines the nodes' authentication credentials (username, password). |
104 |
Furthermore, whether nodes have an extra disk (used for LVM/DRBD storage in |
105 |
Ganeti backends) or not. The VM container nodes should have three separate |
106 |
network interfaces (either physical or vlans) each in the same collision |
107 |
domain; one for the node's public network, one for VMs' public network and one |
108 |
for VMs' private networks. In order to support the most common case, a router |
109 |
is setup on the VMs' public interface and does NAT (hoping the node has itself |
110 |
internet access). |
111 |
|
112 |
The nodes defined in this file can reflect a number of physical nodes, on which |
113 |
you will deploy Synnefo (option (2)), or a number of virtual nodes which will |
114 |
get created by `snf-deploy` using KVM (option (1)), before deploying Synnefo. |
115 |
As we will see in the next sections, one should first set up this file and then |
116 |
tell `snf-deploy` whether the nodes on this file should be created, or treated |
117 |
as pre-existing. |
118 |
|
119 |
An example ``nodes.conf`` file looks like this: |
120 |
|
121 |
FIXME: example file here |
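
Until the official example is filled in, the sketch below illustrates the kind
of information this file holds. The ``[ips]``, ``subnet`` and ``extra_disk``
names are referenced elsewhere in this guide; the remaining section and option
names are illustrative assumptions, not the authoritative format:

.. code-block:: ini

   [network]
   ; all nodes share one subnet, domain, gateway and nameserver
   subnet = 192.168.0.0/28
   gateway = 192.168.0.14
   domain = synnefo.deploy.local

   [ips]
   ; one entry per node; the nameserver's IP must be among them
   node1 = 192.168.0.1
   node2 = 192.168.0.2

   [info]
   ; authentication credentials, shared by all nodes (names are assumptions)
   user = root
   password = 12345
   ; leave blank if the nodes have no spare disk for LVM/DRBD
   extra_disk = /dev/vdb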

``synnefo.conf``
----------------

This file reflects the way Synnefo will be deployed on the nodes defined in
``nodes.conf``.

The important section here is the roles one. In this file we assign each of
the roles described in the :ref:`introduction <snf-deploy>` to a specific
node. The node is one of the nodes defined in ``nodes.conf``. Note that we
refer to nodes by their short hostnames.

Here we also define all credentials related to the users needed by the various
Synnefo services (database, RAPI, RabbitMQ) and the credentials of a test
end-user (`snf-deploy` simulates a user signing up).

Furthermore, we define the Pithos shared directory, which will hold all the
Pithos related data (maps and blocks).

Finally, we define the names of the bridge interfaces controlled by Synnefo,
and a test Image to register once everything is up and running.

An example ``synnefo.conf`` file (based on the previous ``nodes.conf``
example) looks like this:

FIXME: example file here
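
Until the official example is filled in, here is a sketch of the roles
assignment for a single-node setup. The role names come from the introduction,
and ``user_name``/``user_passwd`` are referenced later in this guide; the
section names and layout are illustrative assumptions:

.. code-block:: ini

   [roles]
   ; map each role to a short hostname from nodes.conf
   accounts = node1
   pithos = node1
   cyclades = node1
   db = node1
   qh = node1
   cms = node1
   mq = node1
   ns = node1
   client = node1
   router = node1

   [credentials]
   ; test end-user that snf-deploy signs up automatically
   user_name = dimara@grnet.gr
   user_passwd = lala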

``ganeti.conf``
---------------

This file reflects the way Ganeti clusters will be deployed on the nodes
defined in ``nodes.conf``.

Here we include all info regarding the Ganeti backends, that is: the master
node, its floating IP, the volume group name (in case of LVM support) and the
VMs' public network associated with it. Please note that currently Synnefo
expects a different public network per backend, but can still support multiple
public networks per backend.

FIXME: example file here
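
As an illustration of the information listed above, one backend section might
look like the sketch below. The section name ``ganeti1`` and the
``cluster_nodes``/``master_node`` options are referenced later in this guide;
the remaining option names are assumptions:

.. code-block:: ini

   [ganeti1]
   cluster_nodes = node1
   master_node = node1
   ; the master's floating IP (illustrative option name)
   cluster_ip = 192.168.0.13
   ; volume group name, in case of LVM support (illustrative option name)
   vg = autovg
   ; the VMs' public network associated with this backend (illustrative)
   public_network = 10.0.1.0/24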

``deploy.conf``
---------------

This file customizes `snf-deploy` itself.

It defines some needed directories and also includes options related to the
source of the packages to be deployed: specifically, whether to deploy using
local packages found under a local directory, or using an apt repository. If
deploying from local packages, there is also an option to first download the
packages from a custom URL and save them under the local directory for later
use.

FIXME: example file here
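
For illustration only, such a file might look like the sketch below; apart
from ``use_local_packages``, which is referenced later in this guide, the
option names are assumptions:

.. code-block:: ini

   ; directory holding locally built packages
   package_dir = /var/lib/snf-deploy/packages
   ; deploy from local packages instead of an apt repository
   use_local_packages = True
   ; custom URL to pre-fetch packages from, saved under package_dir
   package_url = http://example.com/packages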

``vcluster.conf``
-----------------

This file defines options that are relevant to the virtual cluster creation,
if one chooses to create one.

There is an option to define the URL of the Image that will be used as the
host OS for the VMs of the virtual cluster. There are also options for
defining an LVM space or a plain file to be used as a second disk, and
networking options to define where to bridge the virtual cluster.
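
For illustration, such a file might look like the sketch below; the
``squeeze_image_url`` and ``bridge`` options are referenced later in this
guide, while the remaining option names are assumptions:

.. code-block:: ini

   ; URL of the Image used as the host OS for the cluster VMs
   squeeze_image_url = <URL of the provided Debian Base image>
   ; bridge to attach the virtual cluster to
   bridge = vbr0
   ; LVM space (or, alternatively, a plain file) used as the second disk
   lvg = autovg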


.. _vcluster:

Virtual Cluster Creation (b)
============================

As stated in the introduction, `snf-deploy` gives you the ability to create a
local virtual cluster using KVM and then deploy Synnefo on top of this
cluster. The number of cluster nodes is arbitrary and is defined in
``nodes.conf``.

This section describes the creation of the virtual cluster, on which Synnefo
will be deployed in the :ref:`next section <inst>`. If you want to deploy
Synnefo on existing physical nodes, you should skip this section.

The first thing you need in order to deploy a virtual cluster is a Debian
Base image, which will be used to spawn the VMs. We already provide an 8GB
Debian Squeeze Base image with preinstalled keys and network-manager hostname
hooks; it resides on our production Pithos service. Please see the
corresponding ``squeeze_image_url`` variable in ``vcluster.conf``. The image
can be fetched by running:

.. code-block:: console

   snf-deploy vcluster image

This will download the image from the URL defined in ``squeeze_image_url``
(Pithos by default) and save it locally under ``/var/lib/snf-deploy/images``.

TODO: mention related options: --img-dir, --extra-disk, --lvg, --os

Once you have the image, you need to set up the local machine's networking
appropriately. You can do this by running:

.. code-block:: console

   snf-deploy vcluster network

This will add a bridge (defined by the ``bridge`` option inside
``vcluster.conf``), install iptables rules to allow traffic from/to the
cluster, and enable forwarding and NAT for the selected network subnet
(defined by the ``subnet`` option inside ``nodes.conf``).

To complete the preparation, you need a DHCP server that will provide the
selected hostnames and IPs to the cluster (defined under ``[ips]`` in
``nodes.conf``). To do so, run:

.. code-block:: console

   snf-deploy vcluster dhcp

This will launch a dnsmasq instance, acting only as a DHCP server and
listening only on the cluster's bridge. Every time you make changes inside
``nodes.conf``, you should re-create the dnsmasq related files (under
``/etc/snf-deploy``) by passing the ``--save-config`` option.

After running all the above preparation tasks, we can finally create the
cluster defined in ``nodes.conf`` by running:

.. code-block:: console

   snf-deploy vcluster create

This will launch all the needed KVM virtual machines, snapshotting the image
we fetched before. Their taps will be connected to the already created bridge,
and their primary interfaces will get the given addresses.

Now that we have the nodes ready, we can move on and deploy Synnefo on them.


.. _inst:

Synnefo Installation (c)
========================

At this point you should have an up-and-running cluster, either virtual
(created in the :ref:`previous section <vcluster>` on your local machine) or
physical, on remote nodes. The cluster should also have valid hostnames and
IPs, and all its nodes should be defined in ``nodes.conf``.

You should also have set up ``synnefo.conf`` to reflect which Synnefo
component will reside on which node.

Node Requirements
-----------------

- OS: Debian Squeeze
- authentication: `root` with the same password on all nodes
- primary network interface: `eth0`
- primary IP in the same IPv4 subnet and network domain
- spare network interfaces: `eth1`, `eth2` (or VLANs on `eth0`)

In case you have created a virtual cluster as described in :ref:`section (b)
<vcluster>`, the above requirements are already taken care of. In case of a
physical cluster, you need to set them up manually yourself before proceeding
with the Synnefo installation.

Preparing the Synnefo deployment
--------------------------------

The following actions are mandatory and must be run before the actual
deployment. In the following, we go over the sub-commands of ``snf-deploy
prepare`` and what they actually do.

Synnefo expects FQDNs, and therefore a nameserver (BIND) should be set up on a
node inside the cluster. All nodes, along with your local machine, should use
this nameserver and search in the corresponding network domain. To this end,
add the following to your local ``resolv.conf`` (replacing the default values
with the ones of your custom configuration):

.. code-block:: console

   search <your_domain> synnefo.deploy.local
   nameserver 192.168.0.1

WARNING: In case you are running the installation on physical nodes, please
ensure that they have the same ``resolv.conf`` and that it does not change
during or after the installation (e.g. because of NetworkManager hooks).

To actually set up the nameserver on the node specified as ``ns`` in
``synnefo.conf``, run:

.. code-block:: console

   snf-deploy prepare ns

To do some node tweaking, and to install the correct `id_rsa/dsa` keys and
`authorized_keys` needed for password-less intra-node communication, run:

.. code-block:: console

   snf-deploy prepare hosts

At this point you should have a cluster with FQDNs and reverse DNS lookups
ready for the Synnefo deployment. Before proceeding, let us verify that all
node requirements for a successful Synnefo installation are met.

To check the network configuration (FQDNs, connectivity), run:

.. code-block:: console

   snf-deploy prepare check

WARNING: In case ping fails, check the ``hosts`` entry in
``/etc/nsswitch.conf`` and make sure that ``dns`` comes after ``files``.
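
For reference, the ``hosts`` line in ``/etc/nsswitch.conf`` should then look
like:

.. code-block:: console

   hosts: files dns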

To set up the apt repository and update each node's package index files:

.. code-block:: console

   snf-deploy prepare apt

Finally, Synnefo needs a shared file system, so we need to set up the NFS
server on the node defined as ``pithos`` in ``synnefo.conf``:

.. code-block:: console

   snf-deploy prepare nfs

If everything is set up correctly and all prerequisites are met, we can start
the Synnefo deployment.

Synnefo deployment
------------------

To install the Synnefo stack on the existing cluster, run:

.. code-block:: console

   snf-deploy synnefo -vvv

This might take a while.

If it finishes without errors, check for a successful installation by
visiting, from your local machine (make sure you have already set up your
local ``resolv.conf`` to point at the cluster's DNS):

| https://accounts.synnefo.deploy.local/im/

and logging in with:

| username: dimara@grnet.gr password: lala

or with the ``user_name`` and ``user_passwd`` defined in your
``synnefo.conf``. Take a small tour, checking out Pithos and the rest of the
Web UI. You can upload a sample file to Pithos to verify that Pithos is
working. Do not try to create a VM yet, since we have not yet added a Ganeti
backend.

If everything seems to work, we go ahead to the last step, which is adding a
Ganeti backend.

Adding a Ganeti Backend
-----------------------

Assuming that everything works as expected, you should have Astakos, Pithos,
CMS, DB and RabbitMQ up and running. Cyclades should work too, but only
partially, because no backend is registered yet. Let's set one up. Currently,
Synnefo supports only Ganeti clusters as valid backends. They have to be
created independently with `snf-deploy`, and once they are up and running, we
register them with Cyclades. As of version 0.12, Synnefo supports multiple
Ganeti backends; `snf-deploy` defines them in ``ganeti.conf``.

After setting up ``ganeti.conf``, run:

.. code-block:: console

   snf-deploy backend create --backend-name ganeti1 -vvv

where ``ganeti1`` should have previously been defined as a section in
``ganeti.conf``. This will create the ``ganeti1`` backend on the corresponding
nodes (``cluster_nodes``, ``master_node``) defined in the ``ganeti1`` section
of the ``ganeti.conf`` file. If you are an experienced user and want to deploy
more than one Ganeti backend, you should create multiple sections in
``ganeti.conf`` and re-run the above command with the corresponding backend
names.

After creating and adding the Ganeti backend, we need to set up the backend
networking. To do so, we run:

.. code-block:: console

   snf-deploy backend network --backend-name ganeti1

And finally, we need to set up the backend storage:

.. code-block:: console

   snf-deploy backend storage --backend-name ganeti1

This command will first check the ``extra_disk`` option in ``nodes.conf`` and
try to find that disk on the nodes of the cluster. If the nodes indeed have
the disk, `snf-deploy` will create a PV and the corresponding VG, and will
enable LVM and DRBD storage in the Ganeti cluster.

If the option is blank, or `snf-deploy` can't find the disk on the nodes, LVM
and DRBD will be disabled and only Ganeti's ``file`` disk template will be
enabled.

To test that everything went as expected, visit the following URL from your
local machine:

| https://cyclades.synnefo.deploy.local/ui/

and try to create a VM. Also create a Private Network and try to connect the
VM to it. If everything works, you have set up Synnefo successfully. Enjoy!


snf-deploy as a DevTool
=======================

For developers, a single-node setup is highly recommended, and `snf-deploy`
is a very helpful tool for it. `snf-deploy` also supports updating packages
that are locally generated. For this to work, please add all \*.deb files to
the packages directory (see ``deploy.conf``) and set the
``use_local_packages`` option to ``True``. Then run:

.. code-block:: console

   snf-deploy synnefo update --use-local-packages
   snf-deploy backend update --backend-name ganeti2 --use-local-packages

For advanced users, `snf-deploy` gives the ability to run some of the
supported actions independently, one or more times. To find out which these
are, run:

.. code-block:: console

   snf-deploy run --help