.. _snf-deploy:

snf-deploy tool
^^^^^^^^^^^^^^^

The `snf-deploy` tool allows you to automatically deploy Synnefo. You can use
`snf-deploy` to deploy Synnefo in two ways:

1. Create a virtual cluster on your local machine and then deploy Synnefo on
   that cluster.
2. Deploy on a pre-existing cluster of physical nodes running Debian Wheezy.

Currently, `snf-deploy` is mostly useful for testing/demo installations and is
not recommended for production Synnefo deployments. If you want to deploy
Synnefo in production, please first read the :ref:`Admin's installation guide
<quick-install-admin-guide>` and then the :ref:`Admin's guide <admin-guide>`.

If you use `snf-deploy` you will end up with an up-and-running Synnefo
installation, but the end-to-end functionality will depend on your underlying
infrastructure (e.g. whether nested virtualization is enabled on your machine,
whether the router is properly configured, whether the nodes have fully
qualified domain names, etc.). In any case, it will let you get a grasp of the
Web UI/API and the base functionality of Synnefo, and also provide a working
configuration that you can consult afterwards, while reading the Admin guides,
to set up a production environment that scales up and uses all available
features (e.g. RADOS, Archipelago, etc.).

`snf-deploy` is a Debian package that should be installed locally. It allows
you to install Synnefo locally, on remote nodes, or on a cluster of VMs that it
spawns on your local machine using KVM. To this end, we break our description
down into three sections:

a. :ref:`snf-deploy configuration <conf>`
b. :ref:`Creating a virtual cluster <vcluster>` (needed for (1))
c. :ref:`Synnefo deployment <inst>` (either on the virtual nodes created in
   section (b), or on remote physical nodes)

If you go for (1) you will need to walk through all three sections. If you go
for (2), you should skip section `(b) <vcluster>`, since you only need sections
`(a) <conf>` and `(c) <inst>`.

Before going any further, we should mention the roles that `snf-deploy` refers
to. The Synnefo roles are described in detail :ref:`here
<physical-node-roles>`. Each role consists of certain components. Note that
multiple roles can co-exist on the same node (virtual or physical).

Currently, `snf-deploy` defines the following roles:

* ns: bind server (DNS)
* db: postgresql server (database)
* mq: rabbitmq server (message queue)
* nfs: nfs server
* astakos: identity service
* pithos: storage service
* cyclades: compute service
* cms: cms service
* stats: stats service
* ganeti: ganeti node
* master: master node

The previous roles are combinations of the following software components:

* HW: IP and internet access
* SSH: ssh keys and config
* DDNS: ddns keys and ddns client config
* NS: nameserver with ddns config
* DNS: resolver config
* APT: apt sources config
* DB: database server with postgresql
* MQ: message queue server with rabbitmq
* NFS: nfs server
* Mount: nfs mount point
* Apache: web server with Apache
* Gunicorn: gunicorn server
* Common: synnefo common
* WEB: synnefo webclient
* Astakos: astakos webapp
* Pithos: pithos webapp
* Cyclades: cyclades webapp
* CMS: cms webapp
* VNC: vnc authentication proxy
* Collectd: collectd config
* Stats: stats webapp
* Kamaki: kamaki client
* Burnin: qa software
* Ganeti: ganeti node
* Master: ganeti master node
* Image: synnefo image os provider
* Network: synnefo networking scripts
* GTools: synnefo tools for ganeti
* GanetiCollectd: collectd config for ganeti nodes

Each component defines the following things:

* commands to check prerequisites
* commands to prepare the installation
* a list of packages to install
* specific configuration files (templates)
* restart/reload commands
* initialization commands
* test commands

All a component needs is the info of the node it gets installed on and the
snf-deploy configuration environment (available after parsing the conf files).
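The per-component contract above can be sketched as a small class hierarchy.
This is an illustrative model only, with hypothetical class and method names;
it is not `snf-deploy`'s actual internal API:

.. code-block:: python

   # Illustrative sketch only: hypothetical names, not snf-deploy's real API.
   class Component:
       """A deployable component, parameterized by node info and config env."""

       def __init__(self, node, env):
           self.node = node   # info of the node this gets installed on
           self.env = env     # parsed snf-deploy configuration environment

       # One hook per item in the list above; subclasses override as needed.
       def check(self):             # commands to check prerequisites
           return []

       def prepare(self):           # commands to prepare the installation
           return []

       def required_packages(self): # list of packages to install
           return []

       def configure(self):         # (template, target path) config files
           return []

       def restart(self):           # restart/reload commands
           return []

       def initialize(self):        # initialization commands
           return []

       def test(self):              # test commands
           return []


   class NFS(Component):
       """Hypothetical NFS component following the pattern above."""

       def required_packages(self):
           return ["nfs-kernel-server"]

       def restart(self):
           return ["/etc/init.d/nfs-kernel-server restart"]

A driver would then call each hook in order (check, prepare, install the
packages, configure, restart, initialize, test) for every component assigned
to a node.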

.. _conf:

Configuration (a)
=================

All configuration of `snf-deploy` happens by editing the following files under
``/etc/snf-deploy``:

``nodes.conf``
--------------

This file reflects the hardware infrastructure on which Synnefo is going to be
deployed, and is the first to be set up before running `snf-deploy`.

It defines the nodes' hostnames and their IPs. Currently `snf-deploy` expects
all nodes to reside under the same domain. Since Synnefo requires FQDNs to
operate, a nameserver is automatically set up in the cluster by `snf-deploy`,
and all nodes will use it as their resolver.

It also defines the nodes' authentication credentials (username, password), and
whether the nodes have an extra disk (used for LVM/DRBD storage in Ganeti
backends) or not. The VM container nodes should have three separate network
interfaces (either physical or vlans), each in the same collision domain: one
for the node's public network, one for the VMs' public network, and one for the
VMs' private networks. To support the most common case, a router is set up on
the VMs' public interface and does NAT (assuming the node itself has internet
access).

The nodes defined in this file can reflect a number of physical nodes on which
you will deploy Synnefo (option (2)), or a number of virtual nodes that will be
created by `snf-deploy` using KVM (option (1)) before deploying Synnefo. As we
will see in the next sections, you should first set up this file and then tell
`snf-deploy` whether the nodes in this file should be created or treated as
pre-existing.

In case you deploy all-in-one, you can install the `snf-deploy` package on the
target node and use the `--autoconf` option. With it, you only need to change
the passwords section; everything else will be configured automatically.

An example ``nodes.conf`` file looks like this:

FIXME: example file here
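Until the example file is added here, the sketch below illustrates the kind of
information described above. Only the ``[ips]`` and passwords sections and the
``subnet`` option are referenced elsewhere in this guide; the remaining section
and key names are illustrative assumptions, not `snf-deploy`'s real schema:

.. code-block:: ini

   ; illustrative sketch only -- not the actual nodes.conf schema
   [network]
   domain = synnefo.live          ; all nodes must share one domain
   subnet = 192.168.0.0/24        ; used later for forwarding/NAT

   [ips]
   node1 = 192.168.0.1            ; node ID -> IP (hostname node1.synnefo.live)
   node2 = 192.168.0.2

   [passwords]
   node1 = 12345                  ; per-node root credentials
   node2 = 12345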

``synnefo.conf``
----------------

This file reflects the way Synnefo will be deployed on the nodes defined in
``nodes.conf``.

The important section here is the roles one. In this file we assign each of the
roles described in the :ref:`introduction <snf-deploy>` to a specific node. The
node is one of the nodes defined in ``nodes.conf``. Note that we refer to nodes
by their ID (node1, node2, etc.).

Here we also define all credentials related to the users needed by the various
Synnefo services (database, RAPI, RabbitMQ) and the credentials of a test
end-user (`snf-deploy` simulates a user signing up).

Furthermore, we define the Pithos shared directory, which will hold all the
Pithos related data (maps and blocks).

Finally, we define the name of the bridge interfaces controlled by Synnefo, and
a testing Image to register once everything is up and running.

An example ``synnefo.conf`` file (based on the previous ``nodes.conf`` example)
looks like this:

FIXME: example file here
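Pending the real example, the sketch below illustrates only the roles section
discussed in this paragraph: each role from the :ref:`introduction
<snf-deploy>` is assigned a node ID from ``nodes.conf``. The section name and
layout are assumptions:

.. code-block:: ini

   ; illustrative sketch only -- the layout is assumed
   [roles]
   ns       = node1
   db       = node1
   mq       = node1
   nfs      = node1
   astakos  = node1
   pithos   = node1
   cyclades = node2
   cms      = node1
   stats    = node2
   ganeti   = node2
   master   = node2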

``ganeti.conf``
---------------

This file reflects the way Ganeti clusters will be deployed on the nodes
defined in ``nodes.conf``.

Here we include all info with regard to Ganeti backends. That is: the master
node, its floating IP, the rest of the cluster nodes (if any), the volume group
name (in case of LVM support), and the VMs' public network associated with it.

FIXME: example file here
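Pending the real example, the following sketch shows the kind of per-backend
section described above (the section and key names are illustrative
assumptions):

.. code-block:: ini

   ; illustrative sketch only -- one section per Ganeti backend
   [ganeti1]
   master         = node2           ; the cluster's master node
   master_ip      = 192.168.0.200   ; its floating IP
   nodes          = node2, node3    ; the rest of the cluster nodes, if any
   vg             = ganeti          ; volume group name, in case of LVM support
   public_network = 10.0.0.0/24     ; the VMs' public network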

``deploy.conf``
---------------

This file customizes `snf-deploy` itself.

It defines some needed directories and also includes options that have to do
with the source of the packages to be deployed; specifically, whether to deploy
using local packages found under a local directory, or using an apt repository.
If deploying from local packages, there is also an option to first download the
packages from a custom URL and save them under the local directory for later
use.

FIXME: example file here
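Pending the real example, the sketch below illustrates the options just
described. Only ``use_local_packages`` is mentioned elsewhere in this guide;
the other key names are illustrative assumptions:

.. code-block:: ini

   ; illustrative sketch only
   use_local_packages = True                   ; install .debs from a local dir
   package_dir = /var/lib/snf-deploy/packages  ; where local packages live
   package_url =                               ; optional URL to fetch them from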

``vcluster.conf``
-----------------

This file defines options that are relevant to the creation of the virtual
cluster, if one chooses to create one.

There is an option to define the URL of the Image that will be used as the host
OS for the VMs of the virtual cluster, options for defining an LVM space or a
plain file to be used as a second disk, and networking options to define where
to bridge the virtual cluster.

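The sketch below illustrates these options. ``squeeze_image_url`` and
``bridge`` are referenced in the next section; the remaining key names are
illustrative assumptions:

.. code-block:: ini

   ; illustrative sketch only
   squeeze_image_url = https://example.org/debian-base.img  ; host OS image
   bridge = vcbr0                 ; where to bridge the virtual cluster
   lvg = vg0                      ; LVM space for the VMs' second disk, or...
   disk_file = /tmp/extra.img     ; ...a plain file used instead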

.. _vcluster:

Virtual Cluster Creation (b)
============================

As stated in the introduction, `snf-deploy` gives you the ability to create a
local virtual cluster using KVM and then deploy Synnefo on top of this cluster.
The number of cluster nodes is arbitrary and is defined in ``nodes.conf``.

This section describes the creation of the virtual cluster, on which Synnefo
will be deployed in the :ref:`next section <inst>`. If you want to deploy
Synnefo on existing physical nodes, you should skip this section.

The first thing you need in order to deploy a virtual cluster is a Debian Base
image, which will be used to spawn the VMs.

FIXME: Find a way to provide this image.

The virtual cluster can be created by running:

.. code-block:: console

   snf-deploy vcluster

This will download the image from the URL defined in ``squeeze_image_url``
(Pithos by default) and save it locally under ``/var/lib/snf-deploy/images``.

TODO: mention related options: --img-dir, --extra-disk, --lvg, --os

Afterwards, it will add a bridge (defined with the ``bridge`` option inside
``vcluster.conf``), set up iptables rules to allow traffic from/to the cluster,
and enable forwarding and NAT for the selected network subnet (defined inside
``nodes.conf`` in the ``subnet`` option).

To complete the preparation, you need a DHCP server that will provide the
selected hostnames and IPs to the cluster (defined under ``[ips]`` in
``nodes.conf``). To this end, `snf-deploy` will launch a dnsmasq instance,
acting only as a DHCP server and listening only on the cluster's bridge.

Finally, it will launch all the needed KVM virtual machines, snapshotting the
image we fetched before. Their taps will be connected to the already created
bridge, and their primary interfaces will get the given addresses.

Now that we have the nodes ready, we can move on and deploy Synnefo on them.


.. _inst:

Synnefo Installation (c)
========================

At this point you should have an up-and-running cluster, either virtual
(created in the :ref:`previous section <vcluster>` on your local machine) or
physical, on remote nodes. The cluster should also have valid hostnames and
IPs, and all its nodes should be defined in ``nodes.conf``.

You should also have set up ``synnefo.conf`` to reflect which Synnefo component
will reside on which node.

Node Requirements
-----------------

- OS: Debian Wheezy
- authentication: a `root` user with the corresponding password for each node
- primary network interface: `eth0`
- spare network interfaces: `eth1`, `eth2` (or vlans on `eth0`)

In case you have created a virtual cluster as described in :ref:`section (b)
<vcluster>`, the above requirements are already taken care of. In case of a
physical cluster, you need to set them up manually yourself before proceeding
with the Synnefo installation.


Synnefo deployment
------------------

To install the Synnefo stack on the existing cluster, run:

.. code-block:: console

   snf-deploy all -vvv

This might take a while.

If this finishes without errors, check for a successful installation by
visiting the following URL from your local machine (make sure you have already
set up your local ``resolv.conf`` to point at the cluster's DNS):

| https://accounts.synnefo.live/astakos/ui/

and logging in with:

| username: user@synnefo.org password: 12345

or with the ``user_name`` and ``user_passwd`` defined in your ``synnefo.conf``.
Take a small tour checking out Pithos and the rest of the Web UI. You can
upload a sample file to Pithos to verify that it is working. To test that
everything went as expected, visit the following URL from your local machine:

.. code-block:: console

   https://cyclades.synnefo.live/cyclades/ui/

and try to create a VM. Also create a Private Network and try to connect to it.
If everything works, you have set up Synnefo successfully. Enjoy!


Adding another Ganeti Backend
-----------------------------

From version 0.12, Synnefo supports multiple Ganeti backends.
`snf-deploy` defines them in ``ganeti.conf``.

After adding another section in ``ganeti.conf``, run:

.. code-block:: console

   snf-deploy backend --cluster-name ganeti2 -vvv


snf-deploy for Ganeti
=====================

`snf-deploy` can be used to deploy a Ganeti cluster on pre-existing nodes
by issuing:

.. code-block:: console

   snf-deploy ganeti --cluster-name ganeti3 -vvv


snf-deploy as a DevTool
=======================

For developers, a single-node setup is highly recommended, and `snf-deploy` is
a very helpful tool. `snf-deploy` can also set up components using locally
generated packages. For this to work, please add all related \*.deb files to
the packages directory (see ``deploy.conf``) and set the
``use_local_packages`` option to ``True``. Then run:

.. code-block:: console

   snf-deploy run <action1> [<action2>..]

to execute predefined actions, or:

.. code-block:: console

   snf-deploy run setup --node nodeX \
       --role ROLE | --component COMPONENT --method METHOD

to set up a Synnefo role on a target node, or to run a specific component's
method.
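The `--role`/`--component` dispatch just described can be sketched as follows.
This is an illustrative model with hypothetical names (the role-to-component
mapping shown is invented), not `snf-deploy`'s actual implementation:

.. code-block:: python

   # Illustrative sketch only -- hypothetical names and mappings.
   # A role is an ordered list of components; `setup --role X` walks every
   # phase of every component, while `--component C --method M` runs just one.
   ROLES = {
       "nfs": ["HW", "SSH", "DDNS", "APT", "NFS"],  # invented example mapping
   }

   PHASES = ("check", "prepare", "install", "configure",
             "restart", "initialize", "test")

   def run(component, method, log):
       """Run one method of one component (here we only record the call)."""
       log.append((component, method))

   def setup(node, role=None, component=None, method=None):
       """Mimic `snf-deploy run setup` dispatch for a target node."""
       log = []
       if role is not None:
           for comp in ROLES[role]:
               for phase in PHASES:
                   run(comp, phase, log)
       elif component is not None and method is not None:
           run(component, method, log)
       return log

   # One component method, as with --component COMPONENT --method METHOD:
   print(setup("node1", component="NFS", method="restart"))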

For instance, to add another node to an existing Ganeti backend, run:

.. code-block:: console

   snf-deploy run setup --node node5 --role ganeti --cluster-name ganeti3

`snf-deploy` keeps track of installed components per node in
``/etc/snf-deploy/status.conf``. If a deployment command fails, the developer
can make the required fix and then re-run the same command; `snf-deploy` will
not re-install components that have already been set up and whose status
is ``ok``.
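
The re-run behaviour can be illustrated with a short sketch. It assumes
``status.conf`` is a simple INI file with one section per node mapping
components to a status string; the actual file layout is an assumption here,
not documented above:

.. code-block:: python

   # Illustrative sketch only -- the status.conf layout below is assumed.
   import configparser

   STATUS = """
   [node1]
   HW = ok
   SSH = ok
   NFS = fail
   """

   def pending_components(status_text, node):
       """Return the components of `node` whose status is not ``ok``."""
       cp = configparser.ConfigParser()
       cp.optionxform = str          # keep component names case-sensitive
       cp.read_string(status_text)
       if not cp.has_section(node):
           return []
       return [comp for comp, state in cp.items(node) if state != "ok"]

   # Only NFS would be (re-)installed on a re-run:
   print(pending_components(STATUS, "node1"))  # -> ['NFS']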