.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

* Identity Management (Astakos)
* Object Storage Service (Pithos+)
* Compute Service (Cyclades)
* Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow this
guide and stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be
installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on both node1 and
node2; they are related to all the services (Astakos, Pithos+, Cyclades,
Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.

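If you do not already have such a share in place, a minimal NFS setup could
look like the following sketch; the packages and export options used here are
assumptions that you should adapt to your environment:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,no_root_squash,sync)" >> /etc/exports
   root@node1:~ # /etc/init.d/nfs-kernel-server restart

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos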

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described at the service's
section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
related to the django apps. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be created
on node2 instead, but we do it on node1 for simplicity. We will create all
needed databases on node1 and then node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

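Optionally, you can already verify that node2 will be able to reach the new
databases. The following check assumes the ``postgresql-client`` package is
installed on node2:

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -W snf_apps

Enter ``example_passw0rd`` at the prompt; reaching an ``snf_apps=>`` prompt
means remote access works.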

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

     # SetEnv no-gzip
     # SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

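Although we must not start apache yet, we can still check that the new
configuration parses cleanly, since ``configtest`` only parses the files and
does not start or restart the server:

.. code-block:: console

   # apache2ctl configtest

This should report ``Syntax OK``.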

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.

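You can verify the new user and its permissions with the standard
``rabbitmqctl`` queries:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions

The ``synnefo`` user should appear, with ``.*`` as its configure, write and
read permissions.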

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

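As a quick sanity check (not an extra setup step), verify from node2 that the
ownership and the setgid bit are visible through the NFS mount:

.. code-block:: console

   root@node2:~ # ls -ld /srv/pithos/data

The owner and group should show as ``www-data``, and the group permissions
should include the ``s`` (setgid) bit.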

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but those
kinds of setups are outside the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(same contents as in node1; you can just copy/paste the file):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default Debian
installs "Recommended" packages, but if you have changed your configuration and
the package didn't install automatically, you should explicitly install it by
running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users that know
what they are doing and want to experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While
installing new snf-* components, new configuration files will appear inside
the directory. In this guide (and for all services), we will edit only the
minimum necessary configuration options, to reflect our setup. Everything else
will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
        # ATTENTION: This *must* be the absolute path if using sqlite3.
        # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                 # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',    # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

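Rather than inventing such a string by hand, you can generate a random one;
the following one-liner is just one possible way to do it:

.. code-block:: console

   # tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 50; echo

Paste the output as the value of ``SECRET_KEY``.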

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
       { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
       { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
       { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now, only local
is supported. The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our
domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served by
astakos. The first element is used to point to a generic landing page for your
services (cyclades, pithos). If you don't have such a page, it can be omitted.
The second and third elements point to our services themselves (the apps) and
should be set as above.

For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
go to https://www.google.com/recaptcha/admin/create and create your own pair.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

Finally, we load the pre-defined user groups:

.. code-block:: console

   # snf-manage loaddata groups

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box at the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good.
Let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user: the one we just
created. This user should have an id with a value of ``1``. It should also
have an "active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
documentation. In production, you can also manually activate a user, by sending
him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only the following options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above, we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory at node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect
the pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` registers the client as a new service
served by astakos. Its name should be identical to the ``id`` name given at the
astakos ``ASTAKOS_CLOUD_SERVICES`` variable. Note that at the Astakos "Conf
Files" section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES``
list to the dictionary ``{ 'url':'https://nod...', 'name':'pithos+',
'id':'pithos' }``. This item represents the pithos+ service. The ``id`` we set
there is the ``id`` we want here.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+
installation. Go ahead and experiment with the interface to make sure
everything works correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't run into any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

* Spawning VMs
* Spawning VMs from Images stored on Pithos+
* Uploading your custom Images to Pithos+
* Spawning VMs from those custom Images
* Registering existing Pithos+ files as Images
* Connecting VMs to the Internet
* Creating Private Networks
* Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM
management for Cyclades, so Cyclades requires a working Ganeti installation at
the backend. Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you
are not familiar with Ganeti. If you are extremely impatient, you can arrive
at the above assumed setup by running:

.. code-block:: console

   root@node1:~ # apt-get install ganeti2
   root@node1:~ # apt-get install ganeti-htools
   root@node2:~ # apt-get install ganeti2
   root@node2:~ # apt-get install ganeti-htools

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have root
access between each other using ssh keys and not passwords. Also, make sure
there is an LVM volume group named ``ganeti`` that will host your VMs' disks.
Finally, set up a bridge interface on the host machines (e.g. ``br0``). Then
run on node1:

.. code-block:: console

   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init
                                   --no-etc-hosts --vg-name=ganeti
                                   --nic-parameters link=br0 --master-netdev eth0
                                   ganeti.node1.example.com
   root@node1:~ # gnt-cluster modify --default-iallocator hail
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

   root@node1:~ # gnt-node add --no-node-setup --master-capable=yes
                               --vm-capable=yes node2.example.com

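Before moving on, you can ask Ganeti itself to sanity-check the new cluster;
``gnt-cluster verify`` is a standard Ganeti command:

.. code-block:: console

   root@node1:~ # gnt-cluster verify

Both nodes should be listed and no errors should be reported.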

For any problems you may stumble upon while installing Ganeti, please refer to
the `official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_.
Installation of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image-host

Now, you need to download and save the corresponding helper package. Please
see `here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest
package. Let's assume that you installed snf-image-host version 0.3.5-1. Then,
you need snf-image-helper v0.3.5-1 on *both* nodes:

.. code-block:: console

   # cd /var/lib/snf-image/helper/
   # wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.3.5-1_all.deb

.. warning:: Be careful: Do NOT install the snf-image-helper debian package.
    Just put it under /var/lib/snf-image/helper/

Once you have downloaded the snf-image-helper package, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # ln -s snf-image-helper_0.3.5-1_all.deb snf-image-helper.deb
   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image-host to run successfully.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without the need of
providing a public URL. More details are described in the next section. For
now, the only thing we need to do is configure snf-image to access our Pithos+
backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

   PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos+.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

snf-image's actual Images
-------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image's Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for
the above Images to be stored:

* Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in
  :file:`/etc/default/snf-image`)
* On a remote host (accessible via a public URL e.g: http://... or ftp://...)
* On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official
`snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page (`image link
   <https://pithos.okeanos.grnet.gr/public/9epgb>`_).

b) Upload the Image to your Pithos+ installation, either using the Pithos+
   Web UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page (`image_metadata link
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it for
spawning a VM from Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

Spawning a VM from a Pithos+ Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters
                      img_passwd=my_vm_example_passw0rd,
                      img_format=diskdump,
                      img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",
                      img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}'
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check
                      testvm1

In the above command:

* ``img_passwd``: the arbitrary root password of your new instance
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
* ``img_id``: If you want to deploy an Image stored on Pithos+ (our case),
  this should have the format
  ``pithos://<username>/<container>/<filename>``:

  * ``username``: ``user@example.com`` (defined during Astakos sign up)
  * ``container``: ``pithos`` (default, if the Web UI was used)
  * ``filename``: the name of the file (visible also from the Web UI)

* ``img_properties``: taken from the metadata file. We used only the two
  mandatory properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
  <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and
can login to your new instance using the root password
``my_vm_example_passw0rd``, then everything works as expected and you have
your new Debian Base VM up and running.

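For example, if the command reports a console connection of type ``vnc`` at
``node1.example.com:11000`` (a hypothetical host:port pair; use whatever it
actually printed), you could connect with any VNC viewer that accepts the
two-colon full-port syntax:

.. code-block:: console

   $ vncviewer node1.example.com::11000

Remember that we set ``vnc_bind_address=0.0.0.0`` earlier, so the console is
reachable from machines other than node1.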

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data. Also, make sure
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add``
succeeds but you cannot connect, again find out what went wrong. Do *NOT*
proceed to the next steps unless you are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos+. Let's
move on to networking now.

.. warning::
    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set
    up the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Network setup overview
----------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades.

Network @ Cyclades level
~~~~~~~~~~~~~~~~~~~~~~~~

Cyclades understands two types of Virtual Networks:

a) One common Public Network (Internet)
b) One or more distinct Private Networks (L2)

a) When a new VM is created, it instantly gets connected to the Public Network
   (Internet). This means it gets a public IPv4 and IPv6 and has access to the
   public Internet.

b) Then each user is able to create one or more Private Networks manually and
   add VMs inside those Private Networks. Private Networks provide Layer 2
   connectivity. All VMs inside a Private Network are completely isolated.

From the VM perspective, every Network corresponds to a distinct NIC. So, the
above are translated as follows:

a) Every newly created VM needs at least one NIC. This NIC connects the VM to
   the Public Network and thus should get a public IPv4 and IPv6.

b) For every Private Network, the VM gets a new NIC, which is added during the
   connection of the VM to the Private Network (without an IP). This NIC should
   have L2 connectivity with all other NICs connected to this Private Network.

To achieve the above, first of all, we need Network and IP Pool management
support at Ganeti level, for Cyclades to be able to issue the corresponding
commands.

Network @ Ganeti level
~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti does not support IP Pool management. However, we've been
actively in touch with the official Ganeti team, who are reviewing a relatively
big patchset that implements this functionality (you can find it at the
ganeti-devel mailing list). We hope that the functionality will be merged to
the Ganeti master branch soon and appear on Ganeti 2.7.

Furthermore, currently the `~okeanos service <http://okeanos.grnet.gr>`_ uses
the same patchset with slight differences on top of Ganeti 2.4.5. Cyclades 0.9
is compatible with this old patchset and we do not guarantee that it will work
with the updated patchset sent to ganeti-devel.

We do *NOT* recommend applying the patchset yourself on the current Ganeti
master, unless you are an experienced Cyclades and Ganeti integrator and you
really know what you are doing.

Instead, be a little patient and we hope that everything will work out of the
box, once the patchset makes it into the Ganeti master. When it does, Cyclades
will get updated to become compatible with that Ganeti version.

Network @ Physical host level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We talked about the two types of Network from the Cyclades perspective, from
the VMs perspective and from Ganeti's perspective. Finally, we need to talk
about the Networks from the physical (VM container) host's perspective.

If your version of Ganeti supports IP pool management, then you need to set up
your physical hosts for the two types of Networks. For the second type (Private
Networks), our reference installation uses a number of pre-provisioned bridges
(one for each Network), which are connected to the corresponding number of
pre-provisioned vlans on each physical host (node1 and node2). For the first
type (Public Network), our reference installation uses routing over one
pre-provisioned vlan on each host (node1 and node2). It also uses the `NFDHCPD`
package for dynamically serving specific public IPs managed by Ganeti.

Public Network setup
--------------------

Physical hosts' public network setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The physical hosts' setup is out of the scope of this guide.

However, two common cases that you may want to consider (and choose from) are:

a) One public bridge, where all VMs' public tap interfaces will connect.
b) IP-less routing over the same vlan on every host.

When you set up your physical hosts (node1 and node2) for the Public Network,
you then need to inform Ganeti about the Network's IP range.

Add the public network to Ganeti
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once you have Ganeti with IP pool management up and running, you need to choose
the public network for your VMs and add it to Ganeti. Let's assume that you
want to assign IPs from the ``5.6.7.0/27`` range to your new VMs, with
``5.6.7.1`` as their gateway. You can add the network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 public_network

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect public_network default public_link

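Assuming your patched Ganeti exposes the usual network subcommands, you can
inspect what was just created. Since IP pool management comes from the external
patchset discussed earlier, the exact output (and even the availability of
these subcommands) depends on the patchset version you are running:

.. code-block:: console

   # gnt-network list
   # gnt-network info public_network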

Your new network is now ready from the Ganeti perspective. Now, we need to set
up `NFDHCPD` to actually reply with the correct IPs (that Ganeti will choose
for each NIC).

NFDHCPD
~~~~~~~

At this point, Ganeti knows about your preferred network, it can manage the IP
pool and choose a specific IP for each new VM's NIC. However, the actual
assignment of the IP to the NIC is not done by Ganeti. It is done after the VM
boots and its dhcp client makes a request. When this is done, `NFDHCPD` will
reply to the request with Ganeti's chosen IP. So, we need to install `NFDHCPD`
on all VM-capable nodes of the Ganeti cluster (node1 and node2 in our case) and
connect it to Ganeti:

.. code-block:: console

   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your
new VMs.
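
As a sketch, the two settings could end up looking like this, with ``4.3.2.10``
standing in for your actual DNS server (keep each setting in whatever section
your packaged ``nfdhcpd.conf`` already defines it in):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 4.3.2.10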

Once you are finished, restart the server on all nodes:

1162 |
.. code-block:: console |
1163 |
|
1164 |
# /etc/init.d/nfdhcpd restart |
1165 |
|
1166 |
If you are using ``ferm``, then you need to run the following: |
1167 |
|
1168 |
.. code-block:: console |
1169 |
|
1170 |
# echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf |
1171 |
# /etc/init.d/ferm restart |
1172 |
|
1173 |
Now, you need to connect `NFDHCPD` with Ganeti. To do that, you need to install |
1174 |
a custom KVM ifup script for use by Ganeti, as ``/etc/ganeti/kvm-vif-bridge``, |
1175 |
on all VM-capable GANETI-NODEs (node1 and node2). A sample implementation is |
1176 |
provided along with `snf-cyclades-gtools <snf-cyclades-gtools>`, that will |
1177 |
be installed in the next sections, however you will probably need to write your |
1178 |
own, according to your underlying network configuration. |
1179 |
|
1180 |
Testing the Public Network |
1181 |
~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1182 |
|
1183 |
So, we have setup the bridges/vlans on the physical hosts appropriately, we have |
1184 |
added the desired network to Ganeti, we have installed nfdhcpd and installed the |
1185 |
appropriate ``kvm-vif-bridge`` script under ``/etc/ganeti``. |
1186 |
|
1187 |
Now, it is time to test that the backend infrastracture is correctly setup for |
1188 |
the Public Network. We assume to have used the (b) method on setting up the |
1189 |
physical hosts. We will add a new VM, the same way we did it on the previous |
1190 |
testing section. However, now will also add one NIC, configured to be managed |
1191 |
from our previously defined network. Run on the GANETI-MASTER (node1): |
1192 |
|
1193 |
.. code-block:: console |
1194 |
|
1195 |
# gnt-instance add -o snf-image+default --os-parameters |
1196 |
img_passwd=my_vm_example_passw0rd, |
1197 |
img_format=diskdump, |
1198 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1199 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1200 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1201 |
--net 0:ip=pool,mode=routed,link=public_link |
1202 |
testvm2 |
1203 |
|
1204 |
If the above returns successfully, connect to the new VM and run: |
1205 |
|
1206 |
.. code-block:: console |
1207 |
|
1208 |
root@testvm2:~ # ifconfig -a |
1209 |
|
1210 |
If a network interface appears with an IP from you Public Network's range |
1211 |
(``5.6.7.0/27``) and the corresponding gateway, then you have successfully |
1212 |
connected Ganeti with `NFDHCPD` (and ``kvm-vif-bridge`` works correctly). |
1213 |
|
1214 |
Now ping the outside world. If this works too, then you have also configured |
1215 |
correctly your physical hosts' networking. |
1216 |
|
1217 |
Later, Cyclades will create the first NIC of every new VM by issuing an |
1218 |
analogous command. The first NIC of the instance will be the NIC connected to |
1219 |
the Public Network. The ``link`` variable will be set accordingly in the |
1220 |
Cyclades conf files later on the guide. |
1221 |
|
1222 |
Make sure everything works as expected, before proceeding with the Private |
1223 |
Networks setup. |
1224 |
|
1225 |
.. _private-networks-setup: |
1226 |
|
1227 |
Private Networks setup |
1228 |
---------------------- |
1229 |
|
1230 |
Physical hosts' private networks setup |
1231 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1232 |
|
1233 |
At the physical host's level, it is the administrator's responsibility to |
1234 |
configure the network appropriately, according to his/her needs (as for the |
1235 |
Public Network). |
1236 |
|
1237 |
However we propose the following setup: |
1238 |
|
1239 |
For every possible Private Network we assume a pre-provisioned bridge interface |
1240 |
exists on every host with the same name. Every Private Network will be |
1241 |
associated with one of the pre-provisioned bridges. Then the instance's new NIC |
1242 |
(while connecting to the Private Network) will be connected to that bridge. All |
1243 |
instances' tap interfaces that reside in the same Private Network will be |
1244 |
connected in the corresponding bridge of that network. Furthermore, every |
1245 |
bridge will be connected to a corresponding vlan. So, lets assume that our |
1246 |
Cyclades installation allows for 20 Private Networks to be setup. We should |
1247 |
pre-provision the corresponding bridges and vlans to all the hosts. We can do |
1248 |
this by running on all VM-capable Ganeti nodes (in our case node1 and node2): |
1249 |
|
1250 |
.. code-block:: console |
1251 |
|
1252 |
# $iface=eth0 |
1253 |
# for prv in $(seq 1 20); do |
1254 |
vlan=$prv |
1255 |
bridge=prv$prv |
1256 |
vconfig add $iface $vlan |
1257 |
ifconfig $iface.$vlan up |
1258 |
brctl addbr $bridge |
1259 |
brctl setfd $bridge 0 |
1260 |
brctl addif $bridge $iface.$vlan |
1261 |
ifconfig $bridge up |
1262 |
done |
1263 |
|
1264 |
The above will do the following (assuming ``eth0`` exists on both hosts): |
1265 |
|
1266 |
* provision 20 new bridges: ``prv1`` - ``prv20`` |
1267 |
* provision 20 new vlans: ``eth0.1`` - ``eth0.20`` |
1268 |
* add the corresponding vlan to the equivelant bridge |
1269 |
|
1270 |
You can run ``brctl show`` on both nodes to see if everything was setup |
1271 |
correctly. |
1272 |
|
1273 |
Everything is now setup to support the 20 Cyclades Private Networks. Later, |
1274 |
we will configure Cyclades to talk to those 20 pre-provisioned bridges. |
1275 |
|
1276 |
Testing the Private Networks |
1277 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1278 |
|
1279 |
To test the Private Networks, we will create two instances and put them in the |
1280 |
same Private Network (``prv1``). This means that the instances will have a |
1281 |
second NIC connected to the ``prv1`` pre-provisioned bridge. |
1282 |
|
1283 |
We run the same command as in the Public Network testing section, but with one |
1284 |
more argument for the second NIC: |
1285 |
|
1286 |
.. code-block:: console |
1287 |
|
1288 |
# gnt-instance add -o snf-image+default --os-parameters |
1289 |
img_passwd=my_vm_example_passw0rd, |
1290 |
img_format=diskdump, |
1291 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1292 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1293 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1294 |
--net 0:ip=pool,mode=routed,link=public_link |
1295 |
--net 1:ip=none,mode=bridged,link=prv1 |
1296 |
testvm3 |
1297 |
|
1298 |
# gnt-instance add -o snf-image+default --os-parameters |
1299 |
img_passwd=my_vm_example_passw0rd, |
1300 |
img_format=diskdump, |
1301 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1302 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1303 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1304 |
--net 0:ip=pool,mode=routed,link=public_link |
1305 |
--net 1:ip=none,mode=bridged,link=prv1 |
1306 |
testvm4 |
1307 |
|
Above, we create two instances with their first NIC connected to the Public
Network and their second NIC connected to the first Private Network (``prv1``).
Now, connect to the instances using VNC and make sure everything works as
expected:

a) The instances have access to the public internet through their first eth
   interface (``eth0``), which has been automatically assigned a public IP.

b) Set up the second eth interface of the instances (``eth1``) by assigning two
   different private IPs (e.g.: ``10.0.0.1`` and ``10.0.0.2``) and the
   corresponding netmask. If they ``ping`` each other successfully, then
   the Private Network works (see the sketch below).
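
For step b), a minimal sketch (run inside the instances over VNC; the IPs and
netmask are just the example values above):

.. code-block:: console

   # ifconfig eth1 10.0.0.1 netmask 255.255.255.0 up    # on testvm3
   # ifconfig eth1 10.0.0.2 netmask 255.255.255.0 up    # on testvm4
   # ping -c 3 10.0.0.2                                 # from testvm3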
|
Repeat the procedure with more instances connected to different Private Networks
(``prv{1-20}``), by adding more NICs on each instance. e.g., we add an instance
connected to the Public Network and Private Networks 1, 3 and 19:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd, \
                      img_format=diskdump, \
                      img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", \
                      img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,mode=routed,link=public_link \
                      --net 1:ip=none,mode=bridged,link=prv1 \
                      --net 2:ip=none,mode=bridged,link=prv3 \
                      --net 3:ip=none,mode=bridged,link=prv19 \
                      testvm5

If everything works as expected, then you have finished the Network Setup at the
backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

* ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
* ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
* ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)
* ``kvm-vif-bridge`` (installed under ``/etc/ganeti`` to connect Ganeti with
  NFDHCPD)
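
You can quickly verify that everything landed where expected: the ``hooks``
directory and the ``kvm-vif-bridge`` script should both appear under
``/etc/ganeti``.

.. code-block:: console

   # ls /etc/ganeti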
|
Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   RABBIT_HOST = "node1.example.com:5672"
   RABBIT_USERNAME = "synnefo"
   RABBIT_PASSWORD = "example_rabbitmq_passw0rd"

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows
(depending on your OpenSSL version, the digest may be prefixed with
``(stdin)=``; use only the hex part):

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found
`here <http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.
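
To verify that the new user works, you can query the RAPI directly. This is a
quick sanity check, assuming the cluster name ``ganeti.node1.example.com``
resolves to the Ganeti master (as configured later in this guide); ``-k`` skips
certificate verification, which is fine for self-signed certificates:

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd \
        https://ganeti.node1.example.com:5080/2/info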
|
You have now finished with all needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app

If the package installs successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to result in a working system. In general, sane
defaults have been chosen for most of the options, to cover most of the common
scenarios. However, if you want to tweak Cyclades, feel free to do so, once you
get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   GANETI_MAX_LINK_NUMBER = 20
   ASTAKOS_URL = 'https://accounts.node1.example.com/im/authenticate'

The ``GANETI_MAX_LINK_NUMBER`` is used to construct the names of the bridges
already pre-provisioned for the Private Networks. Thus we set it to ``20``, to
reflect our :ref:`Private Networks setup in the host machines
<private-networks-setup>`. These numbers will suffix the
``GANETI_LINK_PREFIX``, which is already set to ``prv`` and doesn't need to be
changed. With those two variables Cyclades will construct the names of the
available bridges ``prv1`` to ``prv20``, which are the real pre-provisioned
bridges in the backend.

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

Edit ``/etc/synnefo/20-snf-cyclades-app-backend.conf``:

.. code-block:: console

   GANETI_MASTER_IP = "ganeti.node1.example.com"
   GANETI_CLUSTER_INFO = (GANETI_MASTER_IP, 5080, "cyclades", "example_rapi_passw0rd")

``GANETI_MASTER_IP`` denotes the Ganeti master's floating IP. We provide the
corresponding domain that resolves to that IP, rather than the IP itself, to
ensure Cyclades can talk to Ganeti even after a Ganeti master-failover.

``GANETI_CLUSTER_INFO`` is a tuple containing the ``GANETI_MASTER_IP``, the RAPI
port, the RAPI user's username and the RAPI user's password. We set the above to
reflect our :ref:`RAPI User setup <rapi-user>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://accounts.node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = 'cyclades'
   CLOUDBAR_SERVICES_URL = 'https://accounts.node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://accounts.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we point them to our Astakos deployment URLs. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` registers Cyclades as a new service served by
Astakos. Its name should be identical to the ``id`` given in Astakos'
``ASTAKOS_CLOUD_SERVICES`` variable. Note that in the Astakos :ref:`Conf Files
<conf-astakos>` section, we actually set the second item of the
``ASTAKOS_CLOUD_SERVICES`` list to the dictionary ``{ 'url':'https://nod...',
'name':'cyclades', 'id':'cyclades' }``. This item represents the Cyclades
service. The ``id`` we set there is the ``id`` we want here.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored). So we set that
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   RABBIT_HOST = "node1.example.com:5672"
   RABBIT_USERNAME = "synnefo"
   RABBIT_PASSWORD = "example_rabbitmq_passw0rd"

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_MEDIA_URL = '/static/ui/static/snf/'
   UI_LOGIN_URL = "https://accounts.node1.example.com/im/login"
   UI_LOGOUT_URL = "https://accounts.node1.example.com/im/logout"

``UI_MEDIA_URL`` denotes the location of the UI's static files.

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect users
when they log out. We point that to Astakos, too.

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.
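
As a quick sanity check, you can count the flavors that were just loaded from a
Django shell (a sketch, assuming the Cyclades models are importable as
``synnefo.db.models``, the module path used by the ``snf-cyclades-app``
package):

.. code-block:: console

   $ snf-manage shell
   >>> from synnefo.db.models import Flavor
   >>> Flavor.objects.count()   # should be non-zero after loading the fixture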
|
Servers restart
---------------

We also need to restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` processes all messages published to the Message Queue and
updates the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during the installation of Cyclades, so let's
enable it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
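
For example, to follow the log in real time while the daemon starts consuming
messages:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log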
|
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on the GANETI
   MASTER.
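
To confirm the daemon is actually running on the master (a generic check, not
Synnefo-specific; the ``[s]`` pattern keeps ``grep`` from matching itself):

.. code-block:: console

   # ps aux | grep [s]nf-ganeti-eventd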
|
If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup. Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====