.. _quick-install-admin-guide:
2 |
|
3 |
Administrator's Quick Installation Guide |
4 |
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
5 |
|
6 |
This is the Administrator's quick installation guide. |
7 |
|
8 |
It describes how to install the whole synnefo stack on two (2) physical nodes, |
9 |
with minimal configuration. It installs synnefo from Debian packages, and
10 |
assumes the nodes run Debian Squeeze. After successful installation, you will |
11 |
have the following services running: |
12 |
|
13 |
* Identity Management (Astakos) |
14 |
* Object Storage Service (Pithos+) |
15 |
* Compute Service (Cyclades) |
16 |
* Image Registry Service (Plankton) |
17 |
|
18 |
and a single unified Web UI to manage them all. |
19 |
|
20 |
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are |
21 |
not released yet. |
22 |
|
23 |
If you only want to install the Object Storage Service (Pithos+), follow the guide
and simply stop after the "Testing of Pithos+" section.
25 |
|
26 |
|
27 |
Installation of Synnefo / Introduction |
28 |
====================================== |
29 |
|
30 |
We will install the services in the order of the list above. Cyclades and Plankton
will be installed in a single step (at the end), because at the moment they are
contained in the same software component. Furthermore, we will install all
services on the first physical node, except Pithos+, which will be installed on
the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).
36 |
|
37 |
For the rest of the documentation we will refer to the first physical node as |
38 |
"node1" and the second as "node2". We will also assume that their domain names |
39 |
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and |
40 |
"4.3.2.2" respectively. |
41 |
|
42 |
.. note:: It is important that the two machines are under the same domain name.
If they are not, you can achieve this by editing the file ``/etc/hosts``
on both machines, and adding the following lines:
45 |
|
46 |
.. code-block:: console |
47 |
|
48 |
4.3.2.1 node1.example.com |
49 |
4.3.2.2 node2.example.com |
50 |
|
51 |
|
52 |
General Prerequisites |
53 |
===================== |
54 |
|
55 |
These are the general synnefo prerequisites that you need on node1 and node2,
and they apply to all the services (Astakos, Pithos+, Cyclades, Plankton).
57 |
|
58 |
To be able to download all synnefo components you need to add the following |
59 |
lines in your ``/etc/apt/sources.list`` file: |
60 |
|
61 |
| ``deb http://apt.dev.grnet.gr squeeze main`` |
62 |
| ``deb-src http://apt.dev.grnet.gr squeeze main`` |
63 |
| ``deb http://apt.dev.grnet.gr squeeze-backports main`` |
64 |
|
65 |
and import the repo's GPG key: |
66 |
|
67 |
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -`` |
68 |
|
69 |
Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:
73 |
|
74 |
| ``deb http://backports.debian.org/debian-backports squeeze-backports main`` |
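
After adding these repositories, refresh the package index so the new sources
are picked up (a standard step, shown here for completeness):

.. code-block:: console

# apt-get update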
75 |
|
76 |
You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and pithos-specific
mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. Throughout this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
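
For reference, one possible way to set up such an NFS share is sketched below.
This is only an illustrative example (the export options and the use of the
example IPs are assumptions; adapt them to your environment):

.. code-block:: console

root@node1:~ # apt-get install nfs-kernel-server
root@node1:~ # mkdir -p /srv/pithos
root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_root_squash)" >> /etc/exports
root@node1:~ # exportfs -ra

root@node2:~ # apt-get install nfs-common
root@node2:~ # mkdir -p /srv/pithos
root@node2:~ # mount -t nfs 4.3.2.1:/srv/pithos /srv/pithos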
83 |
|
84 |
Before starting the synnefo installation, you will need basic third party |
85 |
software to be installed and configured on the physical nodes. We will describe |
86 |
each node's general prerequisites separately. Any additional configuration, |
87 |
specific to a synnefo service for each node, will be described at the service's |
88 |
section. |
89 |
|
90 |
Finally, it is required for Cyclades and Ganeti nodes to have synchronized |
91 |
system clocks (e.g. by running ntpd). |
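
Once ntpd has been installed (it is part of the package installation steps in
the following sections), you can verify that a node is actually synchronizing
against its time sources with a quick check, for example:

.. code-block:: console

# ntpq -p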
92 |
|
93 |
Node1 |
94 |
----- |
95 |
|
96 |
General Synnefo dependencies |
97 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
98 |
|
99 |
* apache (http server) |
100 |
* gunicorn (WSGI http server) |
101 |
* postgresql (database) |
102 |
* rabbitmq (message queue) |
103 |
* ntp (NTP daemon) |
104 |
* gevent |
105 |
|
106 |
You can install apache2, postgresql and ntp by running:
107 |
|
108 |
.. code-block:: console |
109 |
|
110 |
# apt-get install apache2 postgresql ntp |
111 |
|
112 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
113 |
the official debian backports: |
114 |
|
115 |
.. code-block:: console |
116 |
|
117 |
# apt-get -t squeeze-backports install gunicorn |
118 |
|
119 |
Also, make sure to install gevent >= 0.13.6. Again from the debian backports: |
120 |
|
121 |
.. code-block:: console |
122 |
|
123 |
# apt-get -t squeeze-backports install python-gevent |
124 |
|
125 |
On node1, we will create our databases, so you will also need the |
126 |
python-psycopg2 package: |
127 |
|
128 |
.. code-block:: console |
129 |
|
130 |
# apt-get install python-psycopg2 |
131 |
|
132 |
To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the |
133 |
following line to ``/etc/apt/sources.list``: |
134 |
|
135 |
.. code-block:: console |
136 |
|
137 |
deb http://www.rabbitmq.com/debian testing main |
138 |
|
139 |
Add the RabbitMQ public key to your trusted key list:
140 |
|
141 |
.. code-block:: console |
142 |
|
143 |
# wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc |
144 |
# apt-key add rabbitmq-signing-key-public.asc |
145 |
|
146 |
Finally, to install the package run: |
147 |
|
148 |
.. code-block:: console |
149 |
|
150 |
# apt-get update |
151 |
# apt-get install rabbitmq-server |
152 |
|
153 |
Database setup |
154 |
~~~~~~~~~~~~~~ |
155 |
|
156 |
On node1, we create a database called ``snf_apps``, that will host all django |
157 |
apps related tables. We also create the user ``synnefo`` and grant him all |
158 |
privileges on the database. We do this by running: |
159 |
|
160 |
.. code-block:: console |
161 |
|
162 |
root@node1:~ # su - postgres |
163 |
postgres@node1:~ $ psql |
164 |
postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0; |
165 |
postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd'; |
166 |
postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo; |
167 |
|
168 |
We also create the database ``snf_pithos`` needed by the pithos+ backend and |
169 |
grant the ``synnefo`` user all privileges on the database. This database could |
170 |
be created on node2 instead, but we do it on node1 for simplicity. We will |
171 |
create all needed databases on node1 and then node2 will connect to them. |
172 |
|
173 |
.. code-block:: console |
174 |
|
175 |
postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0; |
176 |
postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo; |
177 |
|
178 |
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
181 |
|
182 |
.. code-block:: console |
183 |
|
184 |
listen_addresses = '*' |
185 |
|
186 |
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and |
187 |
node2 to connect to the database. Add the following lines under ``#IPv4 local |
188 |
connections:`` : |
189 |
|
190 |
.. code-block:: console |
191 |
|
192 |
host all all 4.3.2.1/32 md5 |
193 |
host all all 4.3.2.2/32 md5 |
194 |
|
195 |
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's |
196 |
actual IPs. Now, restart the server to apply the changes: |
197 |
|
198 |
.. code-block:: console |
199 |
|
200 |
# /etc/init.d/postgresql restart |
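
As an optional sanity check (assuming the ``postgresql-client`` package is
available on node2), you can verify that the ``synnefo`` user can reach the
database over the network before continuing:

.. code-block:: console

root@node2:~ # psql -h 4.3.2.1 -U synnefo -W snf_apps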
201 |
|
202 |
Gunicorn setup |
203 |
~~~~~~~~~~~~~~ |
204 |
|
205 |
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following: |
206 |
|
207 |
.. code-block:: console |
208 |
|
209 |
CONFIG = { |
210 |
'mode': 'django', |
211 |
'environment': { |
212 |
'DJANGO_SETTINGS_MODULE': 'synnefo.settings', |
213 |
}, |
214 |
'working_dir': '/etc/synnefo', |
215 |
'user': 'www-data', |
216 |
'group': 'www-data', |
217 |
'args': ( |
218 |
'--bind=127.0.0.1:8080', |
219 |
'--worker-class=gevent', |
220 |
'--workers=8', |
221 |
'--log-level=debug', |
222 |
), |
223 |
} |
224 |
|
225 |
.. warning:: Do NOT start the server yet, because it won't find the |
226 |
``synnefo.settings`` module. We will start the server after successful |
227 |
installation of astakos. If the server is running:: |
228 |
|
229 |
# /etc/init.d/gunicorn stop |
230 |
|
231 |
Apache2 setup |
232 |
~~~~~~~~~~~~~ |
233 |
|
234 |
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing |
235 |
the following: |
236 |
|
237 |
.. code-block:: console |
238 |
|
239 |
<VirtualHost *:80> |
240 |
ServerName node1.example.com |
241 |
|
242 |
RewriteEngine On |
243 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
244 |
RewriteRule ^(.*)$ - [F,L] |
245 |
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} |
246 |
</VirtualHost> |
247 |
|
248 |
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/`` |
249 |
containing the following: |
250 |
|
251 |
.. code-block:: console |
252 |
|
253 |
<IfModule mod_ssl.c> |
254 |
<VirtualHost _default_:443> |
255 |
ServerName node1.example.com |
256 |
|
257 |
Alias /static "/usr/share/synnefo/static" |
258 |
|
259 |
# SetEnv no-gzip |
260 |
# SetEnv dont-vary |
261 |
|
262 |
AllowEncodedSlashes On |
263 |
|
264 |
RequestHeader set X-Forwarded-Protocol "https" |
265 |
|
266 |
<Proxy * > |
267 |
Order allow,deny |
268 |
Allow from all |
269 |
</Proxy> |
270 |
|
271 |
SetEnv proxy-sendchunked |
272 |
SSLProxyEngine off |
273 |
ProxyErrorOverride off |
274 |
|
275 |
ProxyPass /static ! |
276 |
ProxyPass / http://localhost:8080/ retry=0 |
277 |
ProxyPassReverse / http://localhost:8080/ |
278 |
|
279 |
RewriteEngine On |
280 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
281 |
RewriteRule ^(.*)$ - [F,L] |
282 |
|
283 |
SSLEngine on |
284 |
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem |
285 |
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key |
286 |
</VirtualHost> |
287 |
</IfModule> |
288 |
|
289 |
Now enable sites and modules by running: |
290 |
|
291 |
.. code-block:: console |
292 |
|
293 |
# a2enmod ssl |
294 |
# a2enmod rewrite |
295 |
# a2dissite default |
296 |
# a2ensite synnefo |
297 |
# a2ensite synnefo-ssl |
298 |
# a2enmod headers |
299 |
# a2enmod proxy_http |
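
Optionally, you can check the Apache configuration syntax now, without starting
the server, so that any typos surface before the restart performed later in this
guide:

.. code-block:: console

# apache2ctl configtest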
300 |
|
301 |
.. warning:: Do NOT start/restart the server yet. If the server is running:: |
302 |
|
303 |
# /etc/init.d/apache2 stop |
304 |
|
305 |
.. _rabbitmq-setup: |
306 |
|
307 |
Message Queue setup |
308 |
~~~~~~~~~~~~~~~~~~~ |
309 |
|
310 |
The message queue will run on node1, so we need to create the appropriate |
311 |
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all |
312 |
exchanges: |
313 |
|
314 |
.. code-block:: console |
315 |
|
316 |
# rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd" |
317 |
# rabbitmqctl set_permissions synnefo ".*" ".*" ".*" |
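
You can verify that the user and its permissions were registered correctly by
listing them (the exact output format depends on your RabbitMQ version):

.. code-block:: console

# rabbitmqctl list_users
# rabbitmqctl list_permissions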
318 |
|
319 |
We do not need to initialize the exchanges. This will be done automatically, |
320 |
during the Cyclades setup. |
321 |
|
322 |
Pithos+ data directory setup |
323 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
324 |
|
325 |
As mentioned in the General Prerequisites section, there is a directory called |
326 |
``/srv/pithos`` visible by both nodes. We create and setup the ``data`` |
327 |
directory inside it: |
328 |
|
329 |
.. code-block:: console |
330 |
|
331 |
# cd /srv/pithos |
332 |
# mkdir data |
333 |
# chown www-data:www-data data |
334 |
# chmod g+ws data |
335 |
|
336 |
You are now ready with all general prerequisites concerning node1. Let's go to |
337 |
node2. |
338 |
|
339 |
Node2 |
340 |
----- |
341 |
|
342 |
General Synnefo dependencies |
343 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
344 |
|
345 |
* apache (http server) |
346 |
* gunicorn (WSGI http server) |
347 |
* postgresql (database) |
348 |
* ntp (NTP daemon) |
349 |
* gevent |
350 |
|
351 |
You can install the above by running: |
352 |
|
353 |
.. code-block:: console |
354 |
|
355 |
# apt-get install apache2 postgresql ntp |
356 |
|
357 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
358 |
the official debian backports: |
359 |
|
360 |
.. code-block:: console |
361 |
|
362 |
# apt-get -t squeeze-backports install gunicorn |
363 |
|
364 |
Also, make sure to install gevent >= 0.13.6. Again from the debian backports: |
365 |
|
366 |
.. code-block:: console |
367 |
|
368 |
# apt-get -t squeeze-backports install python-gevent |
369 |
|
370 |
Node2 will connect to the databases on node1, so you will also need the |
371 |
python-psycopg2 package: |
372 |
|
373 |
.. code-block:: console |
374 |
|
375 |
# apt-get install python-psycopg2 |
376 |
|
377 |
Database setup |
378 |
~~~~~~~~~~~~~~ |
379 |
|
380 |
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software, you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.
385 |
|
386 |
Gunicorn setup |
387 |
~~~~~~~~~~~~~~ |
388 |
|
389 |
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following |
390 |
(same contents as in node1; you can just copy/paste the file): |
391 |
|
392 |
.. code-block:: console |
393 |
|
394 |
CONFIG = { |
395 |
'mode': 'django', |
396 |
'environment': { |
397 |
'DJANGO_SETTINGS_MODULE': 'synnefo.settings', |
398 |
}, |
399 |
'working_dir': '/etc/synnefo', |
400 |
'user': 'www-data', |
401 |
'group': 'www-data', |
402 |
'args': ( |
403 |
'--bind=127.0.0.1:8080', |
404 |
'--worker-class=gevent', |
405 |
'--workers=4', |
406 |
'--log-level=debug', |
407 |
'--timeout=43200' |
408 |
), |
409 |
} |
410 |
|
411 |
.. warning:: Do NOT start the server yet, because it won't find the |
412 |
``synnefo.settings`` module. We will start the server after successful |
413 |
installation of astakos. If the server is running:: |
414 |
|
415 |
# /etc/init.d/gunicorn stop |
416 |
|
417 |
Apache2 setup |
418 |
~~~~~~~~~~~~~ |
419 |
|
420 |
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing |
421 |
the following: |
422 |
|
423 |
.. code-block:: console |
424 |
|
425 |
<VirtualHost *:80> |
426 |
ServerName node2.example.com |
427 |
|
428 |
RewriteEngine On |
429 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
430 |
RewriteRule ^(.*)$ - [F,L] |
431 |
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} |
432 |
</VirtualHost> |
433 |
|
434 |
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/`` |
435 |
containing the following: |
436 |
|
437 |
.. code-block:: console |
438 |
|
439 |
<IfModule mod_ssl.c> |
440 |
<VirtualHost _default_:443> |
441 |
ServerName node2.example.com |
442 |
|
443 |
Alias /static "/usr/share/synnefo/static" |
444 |
|
445 |
SetEnv no-gzip |
446 |
SetEnv dont-vary |
447 |
AllowEncodedSlashes On |
448 |
|
449 |
RequestHeader set X-Forwarded-Protocol "https" |
450 |
|
451 |
<Proxy * > |
452 |
Order allow,deny |
453 |
Allow from all |
454 |
</Proxy> |
455 |
|
456 |
SetEnv proxy-sendchunked |
457 |
SSLProxyEngine off |
458 |
ProxyErrorOverride off |
459 |
|
460 |
ProxyPass /static ! |
461 |
ProxyPass / http://localhost:8080/ retry=0 |
462 |
ProxyPassReverse / http://localhost:8080/ |
463 |
|
464 |
SSLEngine on |
465 |
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem |
466 |
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key |
467 |
</VirtualHost> |
468 |
</IfModule> |
469 |
|
470 |
As in node1, enable sites and modules by running: |
471 |
|
472 |
.. code-block:: console |
473 |
|
474 |
# a2enmod ssl |
475 |
# a2enmod rewrite |
476 |
# a2dissite default |
477 |
# a2ensite synnefo |
478 |
# a2ensite synnefo-ssl |
479 |
# a2enmod headers |
480 |
# a2enmod proxy_http |
481 |
|
482 |
.. warning:: Do NOT start/restart the server yet. If the server is running:: |
483 |
|
484 |
# /etc/init.d/apache2 stop |
485 |
|
486 |
We are now ready with all general prerequisites for node2. Now that we have |
487 |
finished with all general prerequisites for both nodes, we can start installing |
488 |
the services. First, let's install Astakos on node1. |
489 |
|
490 |
|
491 |
Installation of Astakos on node1 |
492 |
================================ |
493 |
|
494 |
To install astakos, grab the package from our repository (make sure you made |
495 |
the additions needed in your ``/etc/apt/sources.list`` file, as described |
496 |
previously), by running: |
497 |
|
498 |
.. code-block:: console |
499 |
|
500 |
# apt-get install snf-astakos-app |
501 |
|
502 |
After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (it is marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't get installed automatically, you should
install it manually by running:
507 |
|
508 |
.. code-block:: console |
509 |
|
510 |
# apt-get install snf-webproject |
511 |
|
512 |
The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users who know
what they are doing and want to experiment with synnefo.
516 |
|
517 |
|
518 |
.. _conf-astakos: |
519 |
|
520 |
Configuration of Astakos |
521 |
======================== |
522 |
|
523 |
Conf Files |
524 |
---------- |
525 |
|
526 |
After astakos is successfully installed, you will find the directory |
527 |
``/etc/synnefo`` and some configuration files inside it. The files contain |
528 |
commented configuration options, which are the default options. While installing |
529 |
new snf-* components, new configuration files will appear inside the directory. |
530 |
In this guide (and for all services), we will edit only the minimum necessary |
531 |
configuration options, to reflect our setup. Everything else will remain as is. |
532 |
|
533 |
After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available to empower the
administrator with extensively customizable setups.
536 |
|
537 |
For the snf-webproject component (installed as an astakos dependency), we |
538 |
need the following: |
539 |
|
540 |
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to |
541 |
uncomment and edit the ``DATABASES`` block to reflect our database: |
542 |
|
543 |
.. code-block:: console |
544 |
|
545 |
DATABASES = { |
546 |
'default': { |
547 |
# 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle' |
548 |
'ENGINE': 'postgresql_psycopg2', |
549 |
# ATTENTION: This *must* be the absolute path if using sqlite3. |
550 |
# See: http://docs.djangoproject.com/en/dev/ref/settings/#name |
551 |
'NAME': 'snf_apps', |
552 |
'USER': 'synnefo', # Not used with sqlite3. |
553 |
'PASSWORD': 'example_passw0rd', # Not used with sqlite3. |
554 |
# Set to empty string for localhost. Not used with sqlite3. |
555 |
'HOST': '4.3.2.1', |
556 |
# Set to empty string for default. Not used with sqlite3. |
557 |
'PORT': '5432', |
558 |
} |
559 |
} |
560 |
|
561 |
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit |
562 |
``SECRET_KEY``. This is a django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:
565 |
|
566 |
.. code-block:: console |
567 |
|
568 |
SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)' |
569 |
|
570 |
For astakos specific configuration, edit the following options in |
571 |
``/etc/synnefo/20-snf-astakos-app-settings.conf`` : |
572 |
|
573 |
.. code-block:: console |
574 |
|
575 |
ASTAKOS_DEFAULT_ADMIN_EMAIL = None |
576 |
|
577 |
ASTAKOS_COOKIE_DOMAIN = '.example.com' |
578 |
|
579 |
ASTAKOS_BASEURL = 'https://node1.example.com' |
580 |
|
581 |
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all |
582 |
services). ``ASTAKOS_BASEURL`` is the astakos home page. |
583 |
|
584 |
``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.
589 |
|
590 |
.. note:: For the purpose of this guide, we don't enable recaptcha authentication. |
591 |
If you would like to enable it, you have to edit the following options: |
592 |
|
593 |
.. code-block:: console |
594 |
|
595 |
ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*(' |
596 |
ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*(' |
597 |
ASTAKOS_RECAPTCHA_USE_SSL = True |
598 |
ASTAKOS_RECAPTCHA_ENABLED = True |
599 |
|
600 |
For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY`` |
601 |
go to https://www.google.com/recaptcha/admin/create and create your own pair. |
602 |
|
603 |
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` : |
604 |
|
605 |
.. code-block:: console |
606 |
|
607 |
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/' |
608 |
|
609 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services' |
610 |
|
611 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu' |
612 |
|
613 |
Those settings have to do with the black cloudbar endpoints and will be described |
614 |
in more detail later on in this guide. For now, just edit the domain to point at |
615 |
node1 which is where we have installed Astakos. |
616 |
|
617 |
If you are an advanced user and want to use the Shibboleth Authentication method,
read the relevant :ref:`section <shibboleth-auth>`.
619 |
|
620 |
.. note:: Because Cyclades and Astakos are running on the same machine |
621 |
in our example, we have to deactivate the CSRF verification. We can do so |
622 |
by adding to |
623 |
``/etc/synnefo/99-local.conf``: |
624 |
|
625 |
.. code-block:: console |
626 |
|
627 |
MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware') |
628 |
TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf') |
629 |
|
630 |
Enable Pooling |
631 |
-------------- |
632 |
|
633 |
This section can be skipped, but we strongly recommend that you apply the following,
since it results in a significant performance boost.
635 |
|
636 |
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper |
637 |
around Psycopg2. This allows independent Django requests to reuse pooled DB |
638 |
connections, with significant performance gains. |
639 |
|
640 |
To use, first monkey-patch psycopg2. For Django, run this before the |
641 |
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``: |
642 |
|
643 |
.. code-block:: console |
644 |
|
645 |
from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2 |
646 |
monkey_patch_psycopg2() |
647 |
|
648 |
Since we are running with greenlets, we should modify psycopg2 behavior, so it |
649 |
works properly in a greenlet context: |
650 |
|
651 |
.. code-block:: console |
652 |
|
653 |
from synnefo.lib.db.psyco_gevent import make_psycopg_green |
654 |
make_psycopg_green() |
655 |
|
656 |
Use the Psycopg2 driver as usual. For Django, this means using |
657 |
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable |
658 |
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI |
659 |
driver, through ``DATABASES.OPTIONS`` in django. |
660 |
|
661 |
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf`` |
662 |
file that looks like this: |
663 |
|
664 |
.. code-block:: console |
665 |
|
666 |
# Monkey-patch psycopg2 |
667 |
from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2 |
668 |
monkey_patch_psycopg2() |
669 |
|
670 |
# If running with greenlets |
671 |
from synnefo.lib.db.psyco_gevent import make_psycopg_green |
672 |
make_psycopg_green() |
673 |
|
674 |
DATABASES = { |
675 |
'default': { |
676 |
# 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle' |
677 |
'ENGINE': 'postgresql_psycopg2', |
678 |
'OPTIONS': {'synnefo_poolsize': 8}, |
679 |
|
680 |
# ATTENTION: This *must* be the absolute path if using sqlite3. |
681 |
# See: http://docs.djangoproject.com/en/dev/ref/settings/#name |
682 |
'NAME': 'snf_apps', |
683 |
'USER': 'synnefo', # Not used with sqlite3. |
684 |
'PASSWORD': 'example_passw0rd', # Not used with sqlite3. |
685 |
# Set to empty string for localhost. Not used with sqlite3. |
686 |
'HOST': '4.3.2.1', |
687 |
# Set to empty string for default. Not used with sqlite3. |
688 |
'PORT': '5432', |
689 |
} |
690 |
} |
691 |
|
692 |
Database Initialization |
693 |
----------------------- |
694 |
|
695 |
After configuration is done, we initialize the database by running: |
696 |
|
697 |
.. code-block:: console |
698 |
|
699 |
# snf-manage syncdb |
700 |
|
701 |
In this example we don't need to create a django superuser, so we select
``[no]`` when asked. After a successful sync, we run the migration needed
for astakos:
704 |
|
705 |
.. code-block:: console |
706 |
|
707 |
# snf-manage migrate im |
708 |
|
709 |
Then, we load the pre-defined user groups:
710 |
|
711 |
.. code-block:: console |
712 |
|
713 |
# snf-manage loaddata groups |
714 |
|
715 |
.. _services-reg: |
716 |
|
717 |
Services Registration |
718 |
--------------------- |
719 |
|
720 |
When the database is ready, we configure the elements of the Astakos cloudbar, |
721 |
to point to our future services: |
722 |
|
723 |
.. code-block:: console |
724 |
|
725 |
# snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png |
726 |
# snf-manage service-add "cyclades" https://node1.example.com/ui/ |
727 |
# snf-manage service-add "pithos+" https://node2.example.com/ui/ |
728 |
|
729 |
Servers Initialization |
730 |
---------------------- |
731 |
|
732 |
Finally, we initialize the servers on node1: |
733 |
|
734 |
.. code-block:: console |
735 |
|
736 |
root@node1:~ # /etc/init.d/gunicorn restart |
737 |
root@node1:~ # /etc/init.d/apache2 restart |
738 |
|
739 |
We have now finished the Astakos setup. Let's test it now. |
740 |
|
741 |
|
742 |
Testing of Astakos |
743 |
================== |
744 |
|
745 |
Open your favorite browser and go to: |
746 |
|
747 |
``http://node1.example.com/im`` |
748 |
|
749 |
If this redirects you to ``https://node1.example.com/im`` and you can see |
750 |
the "welcome" door of Astakos, then you have successfully setup Astakos. |
751 |
|
752 |
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in your data in the sign-up form. Then click "SUBMIT". You should now
see a green box at the top, informing you that your request was successful
and has been sent to the administrators. So far so good; let's assume
that you created the user with username ``user@example.com``.
757 |
|
758 |
Now we need to activate that user. Return to a command prompt at node1 and run: |
759 |
|
760 |
.. code-block:: console |
761 |
|
762 |
root@node1:~ # snf-manage user-list |
763 |
|
764 |
This command should show you a list with only one user; the one we just created. |
765 |
This user should have an id with a value of ``1``. It should also have an |
766 |
"active" status with the value of ``0`` (inactive). Now run: |
767 |
|
768 |
.. code-block:: console |
769 |
|
770 |
root@node1:~ # snf-manage user-update --set-active 1 |
771 |
|
772 |
This modifies the active value to ``1``, and actually activates the user. |
773 |
When running in production, the activation is done automatically with the different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos-specific
documentation. In production, you can also manually activate a user by sending
him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.
779 |
|
780 |
Now let's go back to the homepage. Open ``http://node1.example.com/im`` with |
781 |
your browser again. Try to sign in using your new credentials. If the astakos |
782 |
menu appears and you can see your profile, then you have successfully setup |
783 |
Astakos. |
784 |
|
785 |
Let's continue to install Pithos+ now. |
786 |
|
787 |
|
788 |
Installation of Pithos+ on node2 |
789 |
================================ |
790 |
|
791 |
To install pithos+, grab the packages from our repository (make sure you made |
792 |
the additions needed in your ``/etc/apt/sources.list`` file, as described |
793 |
previously), by running: |
794 |
|
795 |
.. code-block:: console |
796 |
|
797 |
# apt-get install snf-pithos-app |
798 |
|
799 |
After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to
801 |
the "Installation of Astakos on node1" section, if you don't remember why this |
802 |
should happen. Now, install the pithos web interface: |
803 |
|
804 |
.. code-block:: console |
805 |
|
806 |
# apt-get install snf-pithos-webclient |
807 |
|
808 |
This package provides the standalone pithos web client. The web client is the |
809 |
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos |
810 |
interface's cloudbar, at the top of the Astakos homepage. |
811 |
|
812 |
|
813 |
.. _conf-pithos: |
814 |
|
815 |
Configuration of Pithos+ |
816 |
======================== |
817 |
|
818 |
Conf Files |
819 |
---------- |
820 |
|
821 |
After pithos+ is successfully installed, you will find the directory |
822 |
``/etc/synnefo`` and some configuration files inside it, as you did in node1 |
823 |
after installation of astakos. Here, you will not have to change anything that |
824 |
has to do with snf-common or snf-webproject. Everything is set at node1. You |
825 |
only need to change settings that have to do with pithos+. Specifically: |
826 |
|
827 |
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:
829 |
|
830 |
.. code-block:: console |
831 |
|
832 |
PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos' |
833 |
|
834 |
PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data' |
835 |
|
836 |
PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate' |
837 |
PITHOS_AUTHENTICATION_USERS = None |
838 |
|
839 |
PITHOS_SERVICE_TOKEN = 'pithos_service_token22w==' |
840 |
PITHOS_USER_CATALOG_URL = 'http://node1.example.com/user_catalogs' |
841 |
PITHOS_USER_FEEDBACK_URL = 'http://node1.example.com/feedback' |
842 |
PITHOS_USER_LOGIN_URL = 'http://node1.example.com/login' |
843 |
|
844 |
|
845 |
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
850 |
|
851 |
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up this
directory in node1's "Pithos+ data directory setup" section.
855 |
|
856 |
The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.
859 |
|
860 |
The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on |
861 |
the Astakos node (node1 in our case): |
862 |
|
863 |
.. code-block:: console |
864 |
|
865 |
# snf-manage service-list |
866 |
|
867 |
The token has been generated automatically during the :ref:`Pithos+ service |
868 |
registration <services-reg>`. |
869 |
|
870 |
Then we need to set up the web UI and connect it to astakos. To do so, edit
871 |
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``: |
872 |
|
873 |
.. code-block:: console |
874 |
|
875 |
PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next=" |
876 |
PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback" |
877 |
|
878 |
The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if |
879 |
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the |
880 |
pithos+ feedback form. Astakos already provides a generic feedback form for all |
881 |
services, so we use this one. |
882 |
|
883 |
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the |
884 |
pithos+ web UI with the astakos web UI (through the top cloudbar): |
885 |
|
886 |
.. code-block:: console |
887 |
|
888 |
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/' |
889 |
PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3' |
890 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services' |
891 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu' |
892 |
|
893 |
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common |
894 |
cloudbar. |
895 |
|
896 |
The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered |
897 |
Astakos service. You can see all :ref:`registered services <services-reg>` by |
898 |
running on the Astakos node (node1): |
899 |
|
900 |
.. code-block:: console |
901 |
|
902 |
# snf-manage service-list |
903 |
|
904 |
The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos service's |
905 |
``id`` as shown by the above command, in our case ``3``. |
906 |
|
907 |
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.
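
As a quick check that astakos actually serves these endpoints, you can query
them directly from node2 (``-k`` is used here only because of the self-signed
certificate of this example setup):

.. code-block:: console

# curl -k https://node1.example.com/im/get_services
# curl -k https://node1.example.com/im/get_menu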
910 |
|
911 |
Pooling and Greenlets |
912 |
--------------------- |
913 |
|
914 |
Pithos is pooling-ready without further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.
917 |
|
918 |
However, as in Astakos, since we are running with Greenlets, it is also |
919 |
recommended to modify psycopg2 behavior so it works properly in a greenlet |
920 |
context. This means adding the following lines at the top of your |
921 |
``/etc/synnefo/10-snf-webproject-database.conf`` file: |
922 |
|
923 |
.. code-block:: console |
924 |
|
925 |
from synnefo.lib.db.psyco_gevent import make_psycopg_green |
926 |
make_psycopg_green() |
927 |
|
928 |
Servers Initialization |
929 |
---------------------- |
930 |
|
931 |
After configuration is done, we initialize the servers on node2: |
932 |
|
933 |
.. code-block:: console |
934 |
|
935 |
root@node2:~ # /etc/init.d/gunicorn restart |
936 |
root@node2:~ # /etc/init.d/apache2 restart |
937 |
|
938 |
You have now finished the Pithos+ setup. Let's test it now. |
939 |
|
940 |
|
941 |
Testing of Pithos+ |
942 |
================== |
943 |
|
944 |
Open your browser and go to the Astakos homepage: |
945 |
|
946 |
``http://node1.example.com/im`` |
947 |
|
948 |
Login, and you will see your profile page. Now, click the "pithos+" link on the |
949 |
top black cloudbar. If everything was setup correctly, this will redirect you |
950 |
to: |
951 |
|
952 |
``https://node2.example.com/ui`` |
953 |
|
954 |
and you will see the blue interface of the Pithos+ application. Click the |
955 |
orange "Upload" button and upload your first file. If the file gets uploaded |
956 |
successfully, then this is your first sign of a successful Pithos+ installation. |
957 |
Go ahead and experiment with the interface to make sure everything works |
958 |
correctly. |
959 |
|
960 |
You can also use the Pithos+ clients to sync data from your Windows PC or Mac.
961 |
|
962 |
If you don't stumble on any problems, then you have successfully installed |
963 |
Pithos+, which you can use as a standalone File Storage Service. |
964 |
|
965 |
If you would like to do more, such as: |
966 |
|
967 |
* Spawning VMs |
968 |
* Spawning VMs from Images stored on Pithos+ |
969 |
* Uploading your custom Images to Pithos+ |
970 |
* Spawning VMs from those custom Images |
971 |
* Registering existing Pithos+ files as Images |
972 |
* Connecting VMs to the Internet
* Creating Private Networks
* Adding VMs to Private Networks
975 |
|
976 |
please continue with the rest of the guide. |
977 |
|
978 |
|
979 |
Cyclades (and Plankton) Prerequisites |
980 |
===================================== |
981 |
|
982 |
Before proceeding with the Cyclades (and Plankton) installation, make sure you |
983 |
have successfully set up Astakos and Pithos+ first, because Cyclades depends |
984 |
on them. If you don't have a working Astakos and Pithos+ installation yet, |
985 |
please return to the :ref:`top <quick-install-admin-guide>` of this guide. |
986 |
|
987 |
Besides Astakos and Pithos+, you will also need a number of additional working |
988 |
prerequisites, before you start the Cyclades installation. |
989 |
|
990 |
Ganeti |
991 |
------ |
992 |
|
993 |
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management |
994 |
for Cyclades, so Cyclades requires a working Ganeti installation at the backend. |
995 |
Please refer to the |
996 |
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the |
997 |
gory details. A successful Ganeti installation concludes with a working |
998 |
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs |
999 |
<GANETI_NODES>`. |
1000 |
|
1001 |
The above Ganeti cluster can run on different physical machines than node1 and |
1002 |
node2 and can scale independently, according to your needs. |
1003 |
|
1004 |
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER |
1005 |
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a |
1006 |
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too. |
1007 |
|
1008 |
We highly recommend that you read the official Ganeti documentation, if you are |
1009 |
not familiar with Ganeti. |
1010 |
|
1011 |
Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. To do so:
1015 |
|
1016 |
.. code-block:: console |
1017 |
|
1018 |
# apt-get install snf-ganeti ganeti-htools |
1019 |
# modprobe drbd minor_count=255 usermode_helper=/bin/true |
1020 |
|
1021 |
You should have: |
1022 |
|
1023 |
Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1 |
1024 |
|
1025 |
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` for password-less root ssh between each other.
If not, skip passing ``--no-ssh-init`` below, but be aware that this will replace
the ``/root/.ssh/*`` files and you might lose access to the master node. Also,
make sure there is an LVM volume group named ``ganeti`` that will host your
VMs' disks. Finally, set up a bridge interface on the host machines (e.g. br0).
Then run on node1:
1034 |
|
1035 |
.. code-block:: console |
1036 |
|
1037 |
root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \ |
1038 |
--no-etc-hosts --vg-name=ganeti \ |
1039 |
--nic-parameters link=br0 --master-netdev eth0 \ |
1040 |
ganeti.node1.example.com |
1041 |
root@node1:~ # gnt-cluster modify --default-iallocator hail |
1042 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path= |
1043 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0 |
1044 |
|
1045 |
root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \ |
1046 |
--vm-capable=yes node2.example.com |
1047 |
root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti |
1048 |
root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default |
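
Before moving on, it is worth confirming that both nodes joined the cluster and
that Ganeti reports no obvious problems, for example:

.. code-block:: console

root@node1:~ # gnt-node list
root@node1:~ # gnt-cluster verify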
1049 |
|
1050 |
For any problems you may stumble upon installing Ganeti, please refer to the |
1051 |
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation |
1052 |
of Ganeti is out of the scope of this guide. |
1053 |
|
1054 |
.. _cyclades-install-snfimage: |
1055 |
|
1056 |
snf-image |
1057 |
--------- |
1058 |
|
1059 |
Installation |
1060 |
~~~~~~~~~~~~ |
1061 |
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images, |
1062 |
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all* |
1063 |
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on |
1064 |
node1 and node2. You can do this by running on *both* nodes: |
1065 |
|
1066 |
.. code-block:: console |
1067 |
|
1068 |
# apt-get install snf-image-host snf-pithos-backend python-psycopg2 |
1069 |
|
1070 |
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able to
handle image files stored on Pithos+. It also needs `python-psycopg2` to be able
to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.
1074 |
|
1075 |
After `snf-image-host` has been installed successfully, create the helper VM by |
1076 |
running on *both* nodes: |
1077 |
|
1078 |
.. code-block:: console |
1079 |
|
1080 |
# snf-image-update-helper |
1081 |
|
1082 |
This will create all the needed files under ``/var/lib/snf-image/helper/`` for |
1083 |
snf-image-host to run successfully, and it may take a few minutes depending on |
1084 |
your Internet connection. |
1085 |
|
1086 |
Configuration |
1087 |
~~~~~~~~~~~~~ |
1088 |
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without the need to provide
a public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.
1092 |
|
1093 |
To do this, we need to set the corresponding variables in |
1094 |
``/etc/default/snf-image``, to reflect our Pithos+ setup: |
1095 |
|
1096 |
.. code-block:: console |
1097 |
|
1098 |
PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos" |
1099 |
|
1100 |
PITHOS_DATA="/srv/pithos/data" |
1101 |
|
1102 |
If you have installed your Ganeti cluster on nodes other than node1 and node2, make
sure that ``/srv/pithos/data`` is visible to all of them.
1104 |
|
1105 |
If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.
1108 |
|
1109 |
Testing |
1110 |
~~~~~~~ |
1111 |
You can test that snf-image is successfully installed by running on the |
1112 |
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1): |
1113 |
|
1114 |
.. code-block:: console |
1115 |
|
1116 |
# gnt-os diagnose |
1117 |
|
1118 |
This should return ``valid`` for snf-image. |
1119 |
|
1120 |
If you are interested in learning more about snf-image's internals (and even using
it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for installation
instructions, documentation on the design and implementation, and
supported Image formats.
1125 |
|
1126 |
.. _snf-image-images: |
1127 |
|
1128 |
Actual Images for snf-image |
1129 |
--------------------------- |
1130 |
|
1131 |
Now that snf-image is installed successfully we need to provide it with some |
1132 |
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``, |
1133 |
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump`` |
1134 |
format. For more information about snf-image Image formats see `here |
1135 |
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_. |
1136 |
|
1137 |
:ref:`snf-image <snf-image>` also supports three (3) different locations for the |
1138 |
above Images to be stored: |
1139 |
|
1140 |
* Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in |
1141 |
:file:`/etc/default/snf-image`) |
1142 |
* On a remote host (accessible via a public URL e.g: http://... or ftp://...) |
1143 |
* On Pithos+ (accessible natively, not only by its public URL) |
1144 |
|
1145 |
For the purpose of this guide, we will use the `Debian Squeeze Base Image |
1146 |
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official |
1147 |
`snf-image page |
1148 |
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is |
1149 |
of type ``diskdump``. We will store it in our new Pithos+ installation. |
1150 |
|
1151 |
To do so, do the following: |
1152 |
|
1153 |
a) Download the Image from the official snf-image page (`image link |
1154 |
<https://pithos.okeanos.grnet.gr/public/9epgb>`_). |
1155 |
|
1156 |
b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web UI |
1157 |
or the command line client `kamaki |
1158 |
<http://docs.dev.grnet.gr/kamaki/latest/index.html>`_. |
1159 |
|
1160 |
Once the Image is uploaded successfully, download the Image's metadata file |
1161 |
from the official snf-image page (`image_metadata link |
1162 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it, for |
1163 |
spawning a VM from Ganeti, in the next section. |
1164 |
|
1165 |
Of course, you can repeat the procedure to upload more Images, available from the |
1166 |
`official snf-image page |
1167 |
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. |
1168 |
|
1169 |
.. _ganeti-with-pithos-images: |
1170 |
|
1171 |
Spawning a VM from a Pithos+ Image, using Ganeti |
1172 |
------------------------------------------------ |
1173 |
|
1174 |
Now, it is time to test our installation so far. So, we have Astakos and |
1175 |
Pithos+ installed, we have a working Ganeti installation, the snf-image |
1176 |
definition installed on all VM-capable nodes and a Debian Squeeze Image on |
1177 |
Pithos+. Make sure you also have the `metadata file |
1178 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image. |
1179 |
|
1180 |
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line: |
1181 |
|
1182 |
.. code-block:: console |
1183 |
|
1184 |
# gnt-instance add -o snf-image+default --os-parameters \ |
1185 |
img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \ |
1186 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check \ |
1187 |
testvm1 |
1188 |
|
1189 |
In the above command: |
1190 |
|
1191 |
* ``img_passwd``: the arbitrary root password of your new instance |
1192 |
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image |
1193 |
* ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this |
1194 |
should have the format ``pithos://<username>/<container>/<filename>``: |
1195 |
* ``username``: ``user@example.com`` (defined during Astakos sign up) |
1196 |
* ``container``: ``pithos`` (default, if the Web UI was used) |
1197 |
* ``filename``: the name of file (visible also from the Web UI) |
1198 |
* ``img_properties``: taken from the metadata file. We use only the two mandatory
properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
1200 |
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_ |
1201 |
|
1202 |
If the ``gnt-instance add`` command returns successfully, then run: |
1203 |
|
1204 |
.. code-block:: console |
1205 |
|
1206 |
# gnt-instance info testvm1 | grep "console connection" |
1207 |
|
1208 |
to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
1210 |
then everything works as expected and you have your new Debian Base VM up and |
1211 |
running. |
1212 |
|
1213 |
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured |
1214 |
to access the Pithos+ database and the Pithos+ backend data. Also, make sure |
1215 |
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` |
1216 |
succeeds but you cannot connect, again find out what went wrong. Do *NOT* |
1217 |
proceed to the next steps unless you are sure everything works till this point. |
1218 |
|
1219 |
If everything works, you have successfully connected Ganeti with Pithos+. Let's |
1220 |
move on to networking now. |
1221 |
|
1222 |
.. warning:: |
1223 |
|
1224 |
You can bypass the networking sections and go straight to |
1225 |
:ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup |
1226 |
the Cyclades Network Service, but only the Cyclades Compute Service |
1227 |
(recommended for now). |
1228 |
|
1229 |
Networking Setup Overview |
1230 |
------------------------- |
1231 |
|
1232 |
This part is deployment-specific and must be customized based on the specific |
1233 |
needs of the system administrator. However, to do so, the administrator needs |
1234 |
to understand how each level handles Virtual Networks, to be able to set up the
1235 |
backend appropriately, before installing Cyclades. To do so, please read the |
1236 |
:ref:`Network <networks>` section before proceeding. |
1237 |
|
1238 |
Since synnefo 0.11 all network actions are managed with the snf-manage |
1239 |
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd, |
1240 |
snf-network, bridges, vlans) to be already configured correctly. The only |
1241 |
actions needed at this point are:
1242 |
|
1243 |
a) Have Ganeti with IP pool management support installed. |
1244 |
|
1245 |
b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc. |
1246 |
|
1247 |
c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs. |
1248 |
|
1249 |
In order to test that everything is set up correctly before installing Cyclades,
1250 |
we will make some testing actions in this section, and the actual setup will be |
1251 |
done afterwards with snf-manage commands. |
1252 |
|
1253 |
.. _snf-network: |
1254 |
|
1255 |
snf-network |
1256 |
~~~~~~~~~~~ |
1257 |
|
1258 |
snf-network includes the `kvm-vif-bridge` script that is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.
1262 |
|
1263 |
Install snf-network on all Ganeti nodes: |
1264 |
|
1265 |
.. code-block:: console |
1266 |
|
1267 |
# apt-get install snf-network |
1268 |
|
1269 |
Then, in :file:`/etc/default/snf-network` set: |
1270 |
|
1271 |
.. code-block:: console |
1272 |
|
1273 |
MAC_MASK=ff:ff:f0:00:00:00 |
1274 |
|
1275 |
.. _nfdhcpd: |
1276 |
|
1277 |
nfdhcpd |
1278 |
~~~~~~~ |
1279 |
|
1280 |
Each NIC's IP is chosen by Ganeti (with IP pool management support). |
1281 |
The `kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.
1284 |
|
1285 |
.. code-block:: console |
1286 |
|
1287 |
# apt-get install nfqueue-bindings-python=0.3+physindev-1 |
1288 |
# apt-get install nfdhcpd |
1289 |
|
1290 |
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
the very least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your new
VMs. Once you are finished, restart the server on all nodes:
1294 |
|
1295 |
.. code-block:: console |
1296 |
|
1297 |
# /etc/init.d/nfdhcpd restart |
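
After editing, you can quickly confirm which values are in effect for the two
settings mentioned above:

.. code-block:: console

# grep -E "dhcp_queue|nameservers" /etc/nfdhcpd/nfdhcpd.conf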
1298 |
|
1299 |
If you are using ``ferm``, then you need to run the following: |
1300 |
|
1301 |
.. code-block:: console |
1302 |
|
1303 |
# echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf |
1304 |
# /etc/init.d/ferm restart |
1305 |
|
1306 |
or otherwise make sure that the following rule is applied after boot:
1307 |
|
1308 |
.. code-block:: console |
1309 |
|
1310 |
# iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42 |
1311 |
|
1312 |
and if you have IPv6 enabled: |
1313 |
|
1314 |
.. code-block:: console |
1315 |
|
1316 |
# ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43 |
1317 |
# ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44 |
1318 |
|
1319 |
You can check which clients are currently served by nfdhcpd by running: |
1320 |
|
1321 |
.. code-block:: console |
1322 |
|
1323 |
# kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid` |
1324 |
|
1325 |
After running the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.
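
A quick way to view the latest entries is:

.. code-block:: console

# tail /var/log/nfdhcpd/nfdhcpd.log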
1326 |
|
1327 |
Public Network Setup |
1328 |
-------------------- |
1329 |
|
1330 |
To achieve basic networking, the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs will
connect. Packets will be "forwarded" to the router and then to the Internet. If
you want a more advanced setup (ip-less routing and proxy-arp), please refer to
the :ref:`Network <networks>` section.
1335 |
|
1336 |
Physical Host Setup |
1337 |
~~~~~~~~~~~~~~~~~~~ |
1338 |
|
1339 |
Assuming ``eth0`` on both hosts is the public interface (directly connected |
1340 |
to the router), run on every node: |
1341 |
|
1342 |
.. code-block:: console |
1343 |
|
1344 |
# brctl addbr br0 |
1345 |
# ip link set br0 up |
1346 |
# vconfig add eth0 100 |
1347 |
# ip link set eth0.100 up |
1348 |
# brctl addif br0 eth0.100 |
1349 |
|
1350 |
|
1351 |
Testing a Public Network |
1352 |
~~~~~~~~~~~~~~~~~~~~~~~~ |
1353 |
|
1354 |
Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:
1357 |
|
1358 |
.. code-block:: console |
1359 |
|
1360 |
# gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public |
1361 |
|
1362 |
Then, connect the network to all your nodegroups. We assume that we only have |
1363 |
one nodegroup (``default``) in our Ganeti cluster: |
1364 |
|
1365 |
.. code-block:: console |
1366 |
|
1367 |
# gnt-network connect test-net-public default bridged br0 |
1368 |
|
1369 |
Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, the same way we did in the
previous testing section. However, we will now also add one NIC, configured to be
managed from our previously defined network. Run on the GANETI-MASTER (node1):
1373 |
|
1374 |
.. code-block:: console |
1375 |
|
1376 |
# gnt-instance add -o snf-image+default --os-parameters \ |
1377 |
img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \ |
1378 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check \ |
1379 |
--net 0:ip=pool,network=test-net-public \ |
1380 |
testvm2 |
1381 |
|
1382 |
If the above returns successfully, connect to the new VM and run: |
1383 |
|
1384 |
.. code-block:: console |
1385 |
|
1386 |
root@testvm2:~ # ip addr |
1387 |
root@testvm2:~ # ip route |
1388 |
root@testvm2:~ # cat /etc/resolv.conf |
1389 |
|
1390 |
to check IP address (5.6.7.2), IP routes (default via 5.6.7.1) and DNS config |
1391 |
(nameserver option in nfdhcpd.conf). This shows correct configuration of |
1392 |
ganeti, snf-network and nfdhcpd. |
1393 |
|
1394 |
Now ping the outside world. If this works too, then you have also configured |
1395 |
correctly your physical host and router. |
1396 |
|
1397 |
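For example, from inside the VM (the target host is just an example; pinging a
hostname also exercises the DNS configuration):

.. code-block:: console

   root@testvm2:~ # ping -c 3 www.example.com
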
Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

- based on MAC filtering
- based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1`` - ``prv100``) is needed, each bridged
on a different physical VLAN. To assure isolation among end-users' private
networks, each network must either have a different MAC prefix (for the
filtering to take place) or be "connected" to a different bridge (i.e. a
different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, one for MAC filtered private
networks and several (e.g. 20) for private networks based on physical VLANs,
and assuming ``eth0`` of both hosts is connected appropriately (via
cable/switch with the VLANs configured correctly), run on every node:

.. code-block:: console

   # apt-get install vlan
   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
     done

The above will do the following:

* provision 21 new bridges: ``prv0`` - ``prv20``
* provision 21 new VLANs: ``eth0.0`` - ``eth0.20``
* add each VLAN to its corresponding bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

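You can also verify that the VLAN interfaces were created (assuming the
interface names produced by the loop above):

.. code-block:: console

   # cat /proc/net/vlan/config
   # ip link show eth0.1
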
Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third NIC connected to ``prv1``.

We run the same commands as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        --net 0:ip=pool,network=test-net-public \
        --net 1:ip=pool,network=test-net-prv-mac \
        --net 2:ip=none,network=test-net-prv-vlan \
        testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        --net 0:ip=pool,network=test-net-public \
        --net 1:ip=pool,network=test-net-prv-mac \
        --net 2:ip=none,network=test-net-prv-vlan \
        testvm4

Above, we create two instances with their first NIC connected to the Internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (a sketch of
the in-VM commands follows the list):

a) The instances have access to the public internet through their first eth
   interface (``eth0``), which has been automatically assigned a public IP.

b) ``eth1`` will have MAC prefix ``aa:00:55``, while ``eth2`` will have the
   default one (``aa:00:00``).

c) Bring ``eth1`` and ``eth2`` up (``ip link set <iface> up``).

d) Run ``dhclient`` on ``eth1`` and ``eth2``.

e) On testvm3, ping 192.168.1.2 and 10.0.0.2.

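A minimal sketch of the in-VM steps (c) - (e), assuming testvm4 obtained the
addresses 192.168.1.2 and 10.0.0.2 on its private NICs:

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2
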
If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

* ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
* ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
* ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variable should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

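As a convenience, the two steps can be combined into a single command. This is
just a sketch, assuming the username and password chosen above; it appends the
generated entry directly to the users file:

.. code-block:: console

   # echo "cyclades {HA1}$(echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5 | awk '{print $NF}') write" >> /var/lib/ganeti/rapi/users
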
More about Ganeti's RAPI users `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.

You have now finished with all needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed for a working system. In general, sane defaults
have been chosen for most of the options, to cover most of the common scenarios.
However, if you want to tweak Cyclades feel free to do so, once you get familiar
with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored). So we set that
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect the
user after logging out. We point that to Astakos, too.

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI user <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. For example, you could modify the
backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

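You can quickly check that the chosen cluster name resolves on node1
(``ganeti.node1.example.com`` is the example name used throughout this guide):

.. code-block:: console

   # getent hosts ganeti.node1.example.com
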
``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti
cluster, we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to attach every
created NIC to a bridge. Once a bridge (e.g. ``br0``) has been created on
every backend node, edit the Synnefo setting CUSTOM_BRIDGED_BRIDGE to 'br0'
and create the network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

- MAC prefix Pool
- Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo:

.. code-block:: console

   $ snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536
   $ snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during the installation of Cyclades, so let's
enable it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

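For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
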
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup. Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Log in and then click 'cyclades' on the
top cloud bar. This should redirect you to:

`https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should currently be
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored on
Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
steps, even though you may already have uploaded an Image on Pithos+ from a
:ref:`previous <snf-image-images>` section:

* Upload an Image file to Pithos+
* Register that Image file to Plankton
* Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set store.enable on
   $ kamaki config set store.pithos_extensions on
   $ kamaki config set store.account USER_UUID
   $ kamaki config set global.token USER_TOKEN

The USER_TOKEN and USER_UUID appear on the user's (``user@example.com``)
`Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly, by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos+ Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a good organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
                         /srv/images/debian_base-6.0-7-x86_64.diskdump \
                         debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote path on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.

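For a quick command line check, you can also list the container's contents
(assuming the ``images`` container created above; the exact listing syntax may
differ slightly between `kamaki` versions):

.. code-block:: console

   $ kamaki store list images
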
Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
       pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump \
       --public \
       --disk-format=diskdump \
       --property OSFAMILY=linux --property ROOT_PARTITION=1 \
       --property description="Debian Squeeze Base System" \
       --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
       --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump`` as an Image in
Plankton. This Image will be public (``--public``), so all users will be able
to spawn VMs from it, and it is of type ``diskdump``. The first two properties
(``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the rest of the
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

`https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returned successfully with all options and properties as
shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the network
functionality from inside Cyclades and discover even more features.


General Testing
===============


Notes
=====