.. _quick-install-admin-guide:
2 |
|
3 |
Administrator's Quick Installation Guide |
4 |
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
5 |
|
6 |
This is the Administrator's quick installation guide. |
7 |
|
8 |
It describes how to install the whole synnefo stack on two (2) physical nodes, |
9 |
with minimum configuration. It installs synnefo from Debian packages, and |
10 |
assumes the nodes run Debian Squeeze. After successful installation, you will |
11 |
have the following services running: |
12 |
|
13 |
* Identity Management (Astakos) |
14 |
* Object Storage Service (Pithos+) |
15 |
* Compute Service (Cyclades) |
16 |
* Image Registry Service (Plankton) |
17 |
|
18 |
and a single unified Web UI to manage them all. |
19 |
|
20 |
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are |
21 |
not released yet. |
22 |
|
23 |
If you just want to install the Object Storage Service (Pithos+), follow the
guide and stop after the "Testing of Pithos+" section.
25 |
|
26 |
|
27 |
Installation of Synnefo / Introduction |
28 |
====================================== |
29 |
|
30 |
We will install the services in the order listed above. Cyclades and Plankton
will be installed in a single step (at the end), because at the moment they are
contained in the same software component. Furthermore, we will install all
services on the first physical node, except Pithos+, which will be installed on
the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).
36 |
|
37 |
For the rest of the documentation we will refer to the first physical node as |
38 |
"node1" and the second as "node2". We will also assume that their domain names |
39 |
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and |
40 |
"4.3.2.2" respectively. |
41 |
|
42 |
|
43 |
General Prerequisites |
44 |
===================== |
45 |
|
46 |
These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).
48 |
|
49 |
To be able to download all synnefo components you need to add the following |
50 |
lines in your ``/etc/apt/sources.list`` file: |
51 |
|
52 |
| ``deb http://apt.dev.grnet.gr squeeze main`` |
53 |
| ``deb-src http://apt.dev.grnet.gr squeeze main`` |
54 |
|
55 |
and import the repo's GPG key: |
56 |
|
57 |
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -`` |
58 |
|
59 |
Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:
63 |
|
64 |
| ``deb http://backports.debian.org/debian-backports squeeze-backports main`` |
65 |
|
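After editing ``/etc/apt/sources.list``, refresh the package index so that the
new repositories become visible:

.. code-block:: console

   # apt-get update
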
66 |
You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2. Node2 has this directory mounted under ``/srv/pithos``, too.
72 |
|
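If you do not already have such a shared directory, a minimal NFS setup along
these lines could look like the following (a sketch only; the export options
are assumptions and should be adapted to your environment):

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_root_squash)' >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs 4.3.2.1:/srv/pithos /srv/pithos
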
73 |
Before starting the synnefo installation, you will need basic third party |
74 |
software to be installed and configured on the physical nodes. We will describe |
75 |
each node's general prerequisites separately. Any additional configuration, |
76 |
specific to a synnefo service for each node, will be described at the service's |
77 |
section. |
78 |
|
79 |
Node1 |
80 |
----- |
81 |
|
82 |
General Synnefo dependencies |
83 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
84 |
|
85 |
* apache (http server) |
86 |
* gunicorn (WSGI http server) |
87 |
* postgresql (database) |
88 |
* rabbitmq (message queue) |
89 |
|
90 |
You can install the above by running: |
91 |
|
92 |
.. code-block:: console |
93 |
|
94 |
# apt-get install apache2 postgresql rabbitmq-server |
95 |
|
96 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
97 |
the official debian backports: |
98 |
|
99 |
.. code-block:: console |
100 |
|
101 |
# apt-get -t squeeze-backports install gunicorn |
102 |
|
103 |
On node1, we will create our databases, so you will also need the |
104 |
python-psycopg2 package: |
105 |
|
106 |
.. code-block:: console |
107 |
|
108 |
# apt-get install python-psycopg2 |
109 |
|
110 |
Database setup |
111 |
~~~~~~~~~~~~~~ |
112 |
|
113 |
On node1, we create a database called ``snf_apps``, which will host all tables
of the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
116 |
|
117 |
.. code-block:: console |
118 |
|
119 |
root@node1:~ # su - postgres |
120 |
postgres@node1:~ $ psql |
121 |
postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0; |
122 |
postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd'; |
123 |
postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo; |
124 |
|
125 |
We also create the database ``snf_pithos`` needed by the pithos+ backend and |
126 |
grant the ``synnefo`` user all privileges on the database. This database could |
127 |
be created on node2 instead, but we do it on node1 for simplicity. We will |
128 |
create all needed databases on node1 and then node2 will connect to them. |
129 |
|
130 |
.. code-block:: console |
131 |
|
132 |
postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0; |
133 |
postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo; |
134 |
|
135 |
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
138 |
|
139 |
.. code-block:: console |
140 |
|
141 |
listen_addresses = '*' |
142 |
|
143 |
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and |
144 |
node2 to connect to the database. Add the following lines under ``#IPv4 local |
145 |
connections:`` : |
146 |
|
147 |
.. code-block:: console |
148 |
|
149 |
host all all 4.3.2.1/32 md5 |
150 |
host all all 4.3.2.2/32 md5 |
151 |
|
152 |
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's |
153 |
actual IPs. Now, restart the server to apply the changes: |
154 |
|
155 |
.. code-block:: console |
156 |
|
157 |
# /etc/init.d/postgresql restart |
158 |
|
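To verify that remote access works, you can optionally try connecting from
node2 (this assumes the ``postgresql-client`` package is already present
there):

.. code-block:: console

   root@node2:~ # psql -h 4.3.2.1 -U synnefo -W snf_apps
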
159 |
Gunicorn setup |
160 |
~~~~~~~~~~~~~~ |
161 |
|
162 |
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following: |
163 |
|
164 |
.. code-block:: console |
165 |
|
166 |
    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--workers=4',
            '--log-level=debug',
        ),
    }
180 |
|
181 |
.. warning:: Do NOT start the server yet, because it won't find the |
182 |
``synnefo.settings`` module. We will start the server after successful |
183 |
installation of astakos. If the server is running:: |
184 |
|
185 |
# /etc/init.d/gunicorn stop |
186 |
|
187 |
Apache2 setup |
188 |
~~~~~~~~~~~~~ |
189 |
|
190 |
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing |
191 |
the following: |
192 |
|
193 |
.. code-block:: console |
194 |
|
195 |
<VirtualHost *:80> |
196 |
ServerName node1.example.com |
197 |
|
198 |
RewriteEngine On |
199 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
200 |
RewriteRule ^(.*)$ - [F,L] |
201 |
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} |
202 |
</VirtualHost> |
203 |
|
204 |
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/`` |
205 |
containing the following: |
206 |
|
207 |
.. code-block:: console |
208 |
|
209 |
<IfModule mod_ssl.c> |
210 |
<VirtualHost _default_:443> |
211 |
ServerName node1.example.com |
212 |
|
213 |
Alias /static "/usr/share/synnefo/static" |
214 |
|
215 |
# SetEnv no-gzip |
216 |
# SetEnv dont-vary |
217 |
|
218 |
AllowEncodedSlashes On |
219 |
|
220 |
RequestHeader set X-Forwarded-Protocol "https" |
221 |
|
222 |
<Proxy * > |
223 |
Order allow,deny |
224 |
Allow from all |
225 |
</Proxy> |
226 |
|
227 |
SetEnv proxy-sendchunked |
228 |
SSLProxyEngine off |
229 |
ProxyErrorOverride off |
230 |
|
231 |
ProxyPass /static ! |
232 |
ProxyPass / http://localhost:8080/ retry=0 |
233 |
ProxyPassReverse / http://localhost:8080/ |
234 |
|
235 |
RewriteEngine On |
236 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
237 |
RewriteRule ^(.*)$ - [F,L] |
238 |
RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE] |
239 |
|
240 |
SSLEngine on |
241 |
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem |
242 |
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key |
243 |
</VirtualHost> |
244 |
</IfModule> |
245 |
|
246 |
Now enable sites and modules by running: |
247 |
|
248 |
.. code-block:: console |
249 |
|
250 |
# a2enmod ssl |
251 |
# a2enmod rewrite |
252 |
# a2dissite default |
253 |
# a2ensite synnefo |
254 |
# a2ensite synnefo-ssl |
255 |
# a2enmod headers |
256 |
# a2enmod proxy_http |
257 |
|
258 |
.. warning:: Do NOT start/restart the server yet. If the server is running:: |
259 |
|
260 |
# /etc/init.d/apache2 stop |
261 |
|
262 |
.. _rabbitmq-setup: |
263 |
|
264 |
Message Queue setup |
265 |
~~~~~~~~~~~~~~~~~~~ |
266 |
|
267 |
The message queue will run on node1, so we need to create the appropriate |
268 |
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all |
269 |
exchanges: |
270 |
|
271 |
.. code-block:: console |
272 |
|
273 |
# rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
274 |
# rabbitmqctl set_permissions synnefo ".*" ".*" ".*" |
275 |
|
276 |
We do not need to initialize the exchanges. This will be done automatically, |
277 |
during the Cyclades setup. |
278 |
|
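You can optionally confirm that the user exists and has the expected
permissions:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions
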
279 |
Pithos+ data directory setup |
280 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
281 |
|
282 |
As mentioned in the General Prerequisites section, there is a directory called |
283 |
``/srv/pithos`` visible by both nodes. We create and setup the ``data`` |
284 |
directory inside it: |
285 |
|
286 |
.. code-block:: console |
287 |
|
288 |
# cd /srv/pithos |
289 |
# mkdir data |
290 |
# chown www-data:www-data data |
291 |
# chmod g+ws data |
292 |
|
293 |
You are now ready with all general prerequisites concerning node1. Let's go to |
294 |
node2. |
295 |
|
296 |
Node2 |
297 |
----- |
298 |
|
299 |
General Synnefo dependencies |
300 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
301 |
|
302 |
* apache (http server) |
303 |
* gunicorn (WSGI http server) |
304 |
* postgresql (database) |
305 |
|
306 |
You can install the above by running: |
307 |
|
308 |
.. code-block:: console |
309 |
|
310 |
# apt-get install apache2 postgresql |
311 |
|
312 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
313 |
the official debian backports: |
314 |
|
315 |
.. code-block:: console |
316 |
|
317 |
# apt-get -t squeeze-backports install gunicorn |
318 |
|
319 |
Node2 will connect to the databases on node1, so you will also need the |
320 |
python-psycopg2 package: |
321 |
|
322 |
.. code-block:: console |
323 |
|
324 |
# apt-get install python-psycopg2 |
325 |
|
326 |
Database setup |
327 |
~~~~~~~~~~~~~~ |
328 |
|
329 |
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.
334 |
|
335 |
Gunicorn setup |
336 |
~~~~~~~~~~~~~~ |
337 |
|
338 |
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(the same as on node1, apart from an extra ``--timeout`` argument that prevents
long-running Pithos+ requests from being killed by the worker timeout):
340 |
|
341 |
.. code-block:: console |
342 |
|
343 |
    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--workers=4',
            '--log-level=debug',
            '--timeout=43200',
        ),
    }
358 |
|
359 |
.. warning:: Do NOT start the server yet, because it won't find the |
360 |
``synnefo.settings`` module. We will start the server after successful |
361 |
installation of astakos. If the server is running:: |
362 |
|
363 |
# /etc/init.d/gunicorn stop |
364 |
|
365 |
Apache2 setup |
366 |
~~~~~~~~~~~~~ |
367 |
|
368 |
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing |
369 |
the following: |
370 |
|
371 |
.. code-block:: console |
372 |
|
373 |
<VirtualHost *:80> |
374 |
ServerName node2.example.com |
375 |
|
376 |
RewriteEngine On |
377 |
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC] |
378 |
RewriteRule ^(.*)$ - [F,L] |
379 |
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} |
380 |
</VirtualHost> |
381 |
|
382 |
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/`` |
383 |
containing the following: |
384 |
|
385 |
.. code-block:: console |
386 |
|
387 |
<IfModule mod_ssl.c> |
388 |
<VirtualHost _default_:443> |
389 |
ServerName node2.example.com |
390 |
|
391 |
Alias /static "/usr/share/synnefo/static" |
392 |
|
393 |
SetEnv no-gzip |
394 |
SetEnv dont-vary |
395 |
AllowEncodedSlashes On |
396 |
|
397 |
RequestHeader set X-Forwarded-Protocol "https" |
398 |
|
399 |
<Proxy * > |
400 |
Order allow,deny |
401 |
Allow from all |
402 |
</Proxy> |
403 |
|
404 |
SetEnv proxy-sendchunked |
405 |
SSLProxyEngine off |
406 |
ProxyErrorOverride off |
407 |
|
408 |
ProxyPass /static ! |
409 |
ProxyPass / http://localhost:8080/ retry=0 |
410 |
ProxyPassReverse / http://localhost:8080/ |
411 |
|
412 |
SSLEngine on |
413 |
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem |
414 |
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key |
415 |
</VirtualHost> |
416 |
</IfModule> |
417 |
|
418 |
As in node1, enable sites and modules by running: |
419 |
|
420 |
.. code-block:: console |
421 |
|
422 |
# a2enmod ssl |
423 |
# a2enmod rewrite |
424 |
# a2dissite default |
425 |
# a2ensite synnefo |
426 |
# a2ensite synnefo-ssl |
427 |
# a2enmod headers |
428 |
# a2enmod proxy_http |
429 |
|
430 |
.. warning:: Do NOT start/restart the server yet. If the server is running:: |
431 |
|
432 |
# /etc/init.d/apache2 stop |
433 |
|
434 |
We are now ready with all general prerequisites for node2. Now that we have |
435 |
finished with all general prerequisites for both nodes, we can start installing |
436 |
the services. First, let's install Astakos on node1. |
437 |
|
438 |
|
439 |
Installation of Astakos on node1 |
440 |
================================ |
441 |
|
442 |
To install astakos, grab the package from our repository (make sure you made |
443 |
the additions needed in your ``/etc/apt/sources.list`` file, as described |
444 |
previously), by running: |
445 |
|
446 |
.. code-block:: console |
447 |
|
448 |
# apt-get install snf-astakos-app |
449 |
|
450 |
After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (it is marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't get installed automatically, you should
install it manually by running:
455 |
|
456 |
.. code-block:: console |
457 |
|
458 |
# apt-get install snf-webproject |
459 |
|
460 |
The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users who know
what they are doing and want to experiment with synnefo.
464 |
|
465 |
|
466 |
.. _conf-astakos: |
467 |
|
468 |
Configuration of Astakos |
469 |
======================== |
470 |
|
471 |
Conf Files |
472 |
---------- |
473 |
|
474 |
After astakos is successfully installed, you will find the directory |
475 |
``/etc/synnefo`` and some configuration files inside it. The files contain |
476 |
commented configuration options, which are the default options. While installing |
477 |
new snf-* components, new configuration files will appear inside the directory. |
478 |
In this guide (and for all services), we will edit only the minimum necessary |
479 |
configuration options, to reflect our setup. Everything else will remain as is. |
480 |
|
481 |
After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, giving the
administrator the power to build extensively customized setups.
484 |
|
485 |
For the snf-webproject component (installed as an astakos dependency), we |
486 |
need the following: |
487 |
|
488 |
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to |
489 |
uncomment and edit the ``DATABASES`` block to reflect our database: |
490 |
|
491 |
.. code-block:: console |
492 |
|
493 |
    DATABASES = {
        'default': {
            # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
            'ENGINE': 'postgresql_psycopg2',
            # ATTENTION: This *must* be the absolute path if using sqlite3.
            # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
            'NAME': 'snf_apps',
            'USER': 'synnefo',                  # Not used with sqlite3.
            'PASSWORD': 'example_passw0rd',     # Not used with sqlite3.
            # Set to empty string for localhost. Not used with sqlite3.
            'HOST': '4.3.2.1',
            # Set to empty string for default. Not used with sqlite3.
            'PORT': '5432',
        }
    }
508 |
|
509 |
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting which is used to provide a
seed for secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:
513 |
|
514 |
.. code-block:: console |
515 |
|
516 |
SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)' |
517 |
|
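If you need a quick way to generate such a string, something like the following
one-liner will do (just an example; any sufficiently long random string is
fine):

.. code-block:: console

   # python -c "import random, string; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for i in range(50))"
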
518 |
For astakos specific configuration, edit the following options in |
519 |
``/etc/synnefo/20-snf-astakos-app-settings.conf`` : |
520 |
|
521 |
.. code-block:: console |
522 |
|
523 |
ASTAKOS_IM_MODULES = ['local'] |
524 |
|
525 |
ASTAKOS_COOKIE_DOMAIN = '.example.com' |
526 |
|
527 |
ASTAKOS_BASEURL = 'https://node1.example.com' |
528 |
|
529 |
ASTAKOS_SITENAME = '~okeanos demo example' |
530 |
|
531 |
ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*(' |
532 |
ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*(' |
533 |
|
534 |
ASTAKOS_RECAPTCHA_USE_SSL = True |
535 |
|
536 |
``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now, only
``local`` is supported. ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of
our deployment, shared by all services. ``ASTAKOS_BASEURL`` is the astakos home
page URL.
539 |
|
540 |
For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY`` |
541 |
go to https://www.google.com/recaptcha/admin/create and create your own pair. |
542 |
|
543 |
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` : |
544 |
|
545 |
.. code-block:: console |
546 |
|
547 |
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/' |
548 |
|
549 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services' |
550 |
|
551 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu' |
552 |
|
553 |
Those settings have to do with the black cloudbar endpoints and will be described |
554 |
in more detail later on in this guide. For now, just edit the domain to point at |
555 |
node1 which is where we have installed Astakos. |
556 |
|
557 |
If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.
559 |
|
560 |
Database Initialization |
561 |
----------------------- |
562 |
|
563 |
After configuration is done, we initialize the database by running: |
564 |
|
565 |
.. code-block:: console |
566 |
|
567 |
# snf-manage syncdb |
568 |
|
569 |
In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:
572 |
|
573 |
.. code-block:: console |
574 |
|
575 |
# snf-manage migrate im |
576 |
|
577 |
Then, we load the pre-defined user groups:
578 |
|
579 |
.. code-block:: console |
580 |
|
581 |
# snf-manage loaddata groups |
582 |
|
583 |
.. _services-reg: |
584 |
|
585 |
Services Registration |
586 |
--------------------- |
587 |
|
588 |
When the database is ready, we configure the elements of the Astakos cloudbar, |
589 |
to point to our future services: |
590 |
|
591 |
.. code-block:: console |
592 |
|
593 |
# snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png |
594 |
# snf-manage service-add "cyclades" https://node1.example.com/ui/ |
595 |
# snf-manage service-add "pithos+" https://node2.example.com/ui/ |
596 |
|
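You can list the registered services at any time; the output includes each
service's ``id`` and its automatically generated token, both of which we will
need later in this guide:

.. code-block:: console

   # snf-manage service-list
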
597 |
Servers Initialization |
598 |
---------------------- |
599 |
|
600 |
Finally, we initialize the servers on node1: |
601 |
|
602 |
.. code-block:: console |
603 |
|
604 |
root@node1:~ # /etc/init.d/gunicorn restart |
605 |
root@node1:~ # /etc/init.d/apache2 restart |
606 |
|
607 |
We have now finished the Astakos setup. Let's test it.
608 |
|
609 |
|
610 |
Testing of Astakos |
611 |
================== |
612 |
|
613 |
Open your favorite browser and go to: |
614 |
|
615 |
``http://node1.example.com/im`` |
616 |
|
617 |
If this redirects you to ``https://node1.example.com/im`` and you can see |
618 |
the "welcome" door of Astakos, then you have successfully setup Astakos. |
619 |
|
620 |
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in your data in the sign up form. Then click "SUBMIT". You should now
see a green box at the top, informing you that your request was successful and
has been sent to the administrators. So far so good; let's assume that you
created the user with username ``user@example.com``.
625 |
|
626 |
Now we need to activate that user. Return to a command prompt at node1 and run: |
627 |
|
628 |
.. code-block:: console |
629 |
|
630 |
root@node1:~ # snf-manage user-list |
631 |
|
632 |
This command should show you a list with only one user; the one we just created. |
633 |
This user should have an id with a value of ``1``. It should also have an |
634 |
"active" status with the value of ``0`` (inactive). Now run: |
635 |
|
636 |
.. code-block:: console |
637 |
|
638 |
root@node1:~ # snf-manage user-modify --set-active 1 |
639 |
|
640 |
This modifies the active value to ``1`` and actually activates the user.
When running in production, activation is done automatically, through the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user by
sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.
647 |
|
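Optionally, run the listing command again to confirm that the account is now
marked as active:

.. code-block:: console

   root@node1:~ # snf-manage user-list
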
648 |
Now let's go back to the homepage. Open ``http://node1.example.com/im`` with |
649 |
your browser again. Try to sign in using your new credentials. If the astakos |
650 |
menu appears and you can see your profile, then you have successfully setup |
651 |
Astakos. |
652 |
|
653 |
Let's continue to install Pithos+ now. |
654 |
|
655 |
|
656 |
Installation of Pithos+ on node2 |
657 |
================================ |
658 |
|
659 |
To install pithos+, grab the packages from our repository (make sure you made |
660 |
the additions needed in your ``/etc/apt/sources.list`` file, as described |
661 |
previously), by running: |
662 |
|
663 |
.. code-block:: console |
664 |
|
665 |
# apt-get install snf-pithos-app |
666 |
|
667 |
After successful installation of snf-pithos-app, make sure that also |
668 |
snf-webproject has been installed (marked as "Recommended" package). Refer to |
669 |
the "Installation of Astakos on node1" section, if you don't remember why this |
670 |
should happen. Now, install the pithos web interface: |
671 |
|
672 |
.. code-block:: console |
673 |
|
674 |
# apt-get install snf-pithos-webclient |
675 |
|
676 |
This package provides the standalone pithos web client. The web client is the |
677 |
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos |
678 |
interface's cloudbar, at the top of the Astakos homepage. |
679 |
|
680 |
|
681 |
.. _conf-pithos: |
682 |
|
683 |
Configuration of Pithos+ |
684 |
======================== |
685 |
|
686 |
Conf Files |
687 |
---------- |
688 |
|
689 |
After pithos+ is successfully installed, you will find the directory |
690 |
``/etc/synnefo`` and some configuration files inside it, as you did in node1 |
691 |
after installation of astakos. Here, you will not have to change anything that |
692 |
has to do with snf-common or snf-webproject. Everything is set at node1. You |
693 |
only need to change settings that have to do with pithos+. Specifically: |
694 |
|
695 |
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set |
696 |
only the two options: |
697 |
|
698 |
.. code-block:: console |
699 |
|
700 |
PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos' |
701 |
|
702 |
PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data' |
703 |
|
704 |
PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate' |
705 |
PITHOS_AUTHENTICATION_USERS = None |
706 |
|
707 |
PITHOS_SERVICE_TOKEN = 'pithos_service_token22w==' |
708 |
|
709 |
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and that it should connect as user ``synnefo`` with
password ``example_passw0rd``. All those settings were set up in node1's
"Database setup" section.
714 |
|
715 |
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.
719 |
|
720 |
The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI the
astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.
723 |
|
724 |
The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on |
725 |
the Astakos node (node1 in our case): |
726 |
|
727 |
.. code-block:: console |
728 |
|
729 |
# snf-manage service-list |
730 |
|
731 |
The token has been generated automatically during the :ref:`Pithos+ service |
732 |
registration <services-reg>`. |
733 |
|
734 |
Then we need to setup the web UI and connect it to astakos. To do so, edit |
735 |
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``: |
736 |
|
737 |
.. code-block:: console |
738 |
|
739 |
PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next=" |
740 |
PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback" |
741 |
|
742 |
The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if |
743 |
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the |
744 |
pithos+ feedback form. Astakos already provides a generic feedback form for all |
745 |
services, so we use this one. |
746 |
|
747 |
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the |
748 |
pithos+ web UI with the astakos web UI (through the top cloudbar): |
749 |
|
750 |
.. code-block:: console |
751 |
|
752 |
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/' |
753 |
PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3' |
754 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services' |
755 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu' |
756 |
|
757 |
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common |
758 |
cloudbar. |
759 |
|
760 |
The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered |
761 |
Astakos service. You can see all :ref:`registered services <services-reg>` by |
762 |
running on the Astakos node (node1): |
763 |
|
764 |
.. code-block:: console |
765 |
|
766 |
# snf-manage service-list |
767 |
|
768 |
The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos service's |
769 |
``id`` as shown by the above command, in our case ``3``. |
770 |
|
771 |
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the |
772 |
pithos+ web client to get from astakos all the information needed to fill its |
773 |
own cloudbar. So we put our astakos deployment urls there. |
774 |
|
775 |
Servers Initialization |
776 |
---------------------- |
777 |
|
778 |
After configuration is done, we initialize the servers on node2: |
779 |
|
780 |
.. code-block:: console |
781 |
|
782 |
root@node2:~ # /etc/init.d/gunicorn restart |
783 |
root@node2:~ # /etc/init.d/apache2 restart |
784 |
|
785 |
You have now finished the Pithos+ setup. Let's test it.
786 |
|
787 |
|
788 |
Testing of Pithos+ |
789 |
================== |
790 |
|
791 |
Open your browser and go to the Astakos homepage: |
792 |
|
793 |
``http://node1.example.com/im`` |
794 |
|
795 |
Login, and you will see your profile page. Now, click the "pithos+" link on the |
796 |
top black cloudbar. If everything was setup correctly, this will redirect you |
797 |
to: |
798 |
|
799 |
``https://node2.example.com/ui`` |
800 |
|
801 |
and you will see the blue interface of the Pithos+ application. Click the |
802 |
orange "Upload" button and upload your first file. If the file gets uploaded |
803 |
successfully, then this is your first sign of a successful Pithos+ installation. |
804 |
Go ahead and experiment with the interface to make sure everything works |
805 |
correctly. |
806 |
|
807 |
You can also use the Pithos+ clients to sync data from your Windows PC or Mac.
808 |
|
809 |
If you don't stumble on any problems, then you have successfully installed |
810 |
Pithos+, which you can use as a standalone File Storage Service. |
811 |
|
812 |
If you would like to do more, such as: |
813 |
|
814 |
* Spawning VMs |
815 |
* Spawning VMs from Images stored on Pithos+ |
816 |
* Uploading your custom Images to Pithos+ |
817 |
* Spawning VMs from those custom Images |
818 |
* Registering existing Pithos+ files as Images |
819 |
* Connecting VMs to the Internet
* Creating Private Networks
* Adding VMs to Private Networks
822 |
|
823 |
please continue with the rest of the guide. |
824 |
|
825 |
|
826 |
Cyclades (and Plankton) Prerequisites |
827 |
===================================== |
828 |
|
829 |
Before proceeding with the Cyclades (and Plankton) installation, make sure you |
830 |
have successfully set up Astakos and Pithos+ first, because Cyclades depends |
831 |
on them. If you don't have a working Astakos and Pithos+ installation yet, |
832 |
please return to the :ref:`top <quick-install-admin-guide>` of this guide. |
833 |
|
834 |
Besides Astakos and Pithos+, you will also need a number of additional working |
835 |
prerequisites, before you start the Cyclades installation. |
836 |
|
837 |
Ganeti |
838 |
------ |
839 |
|
840 |
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management |
841 |
for Cyclades, so Cyclades requires a working Ganeti installation at the backend. |
842 |
Please refer to the |
843 |
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the |
844 |
gory details. A successful Ganeti installation concludes with a working |
845 |
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs |
846 |
<GANETI_NODES>`. |
847 |
|
848 |
The above Ganeti cluster can run on different physical machines than node1 and |
849 |
node2 and can scale independently, according to your needs. |
850 |
|
851 |
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER |
852 |
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a |
853 |
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too. |
854 |
|
855 |
We highly recommend that you read the official Ganeti documentation if you are
not familiar with Ganeti. If you are extremely impatient, you can arrive at the
setup assumed above by running:
858 |
|
859 |
.. code-block:: console |
860 |
|
861 |
root@node1:~ # apt-get install ganeti2 |
862 |
root@node1:~ # apt-get install ganeti-htools |
863 |
root@node2:~ # apt-get install ganeti2 |
864 |
root@node2:~ # apt-get install ganeti-htools |
865 |
|
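The cluster initialization below assumes an LVM volume group named ``ganeti``
on both nodes. If you still need to create it, a spare block device (or
partition) can be turned into one; ``/dev/sdb1`` below is only a placeholder:

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1
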
866 |
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's say
it's ``ganeti.node1.example.com``). Make sure node1 and node2 can access each
other as root over ssh, using keys rather than passwords. Also, make sure there
is an LVM volume group named ``ganeti`` that will host your VMs' disks. Finally,
set up a bridge interface on the host machines (e.g. ``br0``). Then run on
node1:
872 |
|
873 |
.. code-block:: console |
874 |
|
875 |
root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init |
876 |
--no-etc-hosts --vg-name=ganeti |
877 |
--nic-parameters link=br0 --master-netdev eth0 |
878 |
ganeti.node1.example.com |
879 |
root@node1:~ # gnt-cluster modify --default-iallocator hail |
880 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path= |
881 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0 |
882 |
|
883 |
root@node1:~ # gnt-node add --no-node-setup --master-capable=yes |
884 |
--vm-capable=yes node2.example.com |
885 |
|
886 |
For any problems you may stumble upon installing Ganeti, please refer to the |
887 |
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation |
888 |
of Ganeti is out of the scope of this guide. |
889 |
|
890 |
.. _cyclades-install-snfimage: |
891 |
|
892 |
snf-image |
893 |
--------- |
894 |
|
895 |
Installation |
896 |
~~~~~~~~~~~~ |
897 |
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images, |
898 |
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all* |
899 |
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on |
900 |
node1 and node2. You can do this by running on *both* nodes: |
901 |
|
902 |
.. code-block:: console |
903 |
|
904 |
# apt-get install snf-image-host snf-pithos-backend python-psycopg2 |
905 |
|
906 |
snf-image also needs `snf-pithos-backend <snf-pithos-backend>` to be able to
handle image files stored on Pithos+. It also needs `python-psycopg2` to be able
to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.
910 |
|
911 |
Now, you need to download and save the corresponding helper package. Please see |
912 |
`here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest package. Let's |
913 |
assume that you installed snf-image-host version 0.4.4-1. Then, you need |
914 |
snf-image-helper v0.4.4-1 on *both* nodes: |
915 |
|
916 |
.. code-block:: console |
917 |
|
918 |
# cd /var/lib/snf-image/helper/ |
919 |
# wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.4.4-1_all.deb |
920 |
|
921 |
.. warning:: Be careful: Do NOT install the snf-image-helper debian package. |
922 |
Just put it under /var/lib/snf-image/helper/ |
923 |
|
924 |
Once you have downloaded the snf-image-helper package, create the helper VM by
running on *both* nodes:
926 |
|
927 |
.. code-block:: console |
928 |
|
929 |
# ln -s snf-image-helper_0.4.4-1_all.deb snf-image-helper.deb |
930 |
# snf-image-update-helper |
931 |
|
932 |
This will create all the needed files under ``/var/lib/snf-image/helper/`` for |
933 |
snf-image-host to run successfully. |
934 |
|
935 |
Configuration |
936 |
~~~~~~~~~~~~~ |
937 |
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without needing a public
URL. More details are described in the next section. For now, the only thing we
need to do is configure snf-image to access our Pithos+ backend.
941 |
|
942 |
To do this, we need to set the corresponding variables in |
943 |
``/etc/default/snf-image``, to reflect our Pithos+ setup: |
944 |
|
945 |
.. code-block:: console |
946 |
|
947 |
PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos" |
948 |
|
949 |
PITHOS_DATA="/srv/pithos/data" |
950 |
|
951 |
If you have installed your Ganeti cluster on nodes other than node1 and node2,
make sure that ``/srv/pithos/data`` is visible to all of them.
953 |
|
954 |
If you would like to use Images that are also/only stored locally, you need to |
955 |
save them under ``IMAGE_DIR``, however this guide targets Images stored only on |
956 |
Pithos+. |
957 |
|
958 |
Testing |
959 |
~~~~~~~ |
960 |
You can test that snf-image is successfully installed by running on the |
961 |
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1): |
962 |
|
963 |
.. code-block:: console |
964 |
|
965 |
# gnt-os diagnose |
966 |
|
967 |
This should return ``valid`` for snf-image. |
968 |
|
969 |
If you are interested in learning more about snf-image's internals (and even in
using it alongside Ganeti without Synnefo), please see
971 |
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information concerning |
972 |
installation instructions, documentation on the design and implementation, and |
973 |
supported Image formats. |
974 |
|
975 |
.. _snf-image-images: |
976 |
|
977 |
snf-image's actual Images |
978 |
------------------------- |
979 |
|
980 |
Now that snf-image is installed successfully we need to provide it with some |
981 |
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``, |
982 |
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump`` |
983 |
format. For more information about snf-image's Image formats see `here |
984 |
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_. |
985 |
|
986 |
:ref:`snf-image <snf-image>` also supports three (3) different locations for the |
987 |
above Images to be stored: |
988 |
|
989 |
* Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in |
990 |
:file:`/etc/default/snf-image`) |
991 |
* On a remote host (accessible via a public URL e.g: http://... or ftp://...) |
992 |
* On Pithos+ (accessible natively, not only by its public URL) |
993 |
|
994 |
For the purpose of this guide, we will use the `Debian Squeeze Base Image |
995 |
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official |
996 |
`snf-image page |
997 |
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is |
998 |
of type ``diskdump``. We will store it in our new Pithos+ installation. |
999 |
|
1000 |
To do so, do the following: |
1001 |
|
1002 |
a) Download the Image from the official snf-image page (`image link |
1003 |
<https://pithos.okeanos.grnet.gr/public/9epgb>`_). |
1004 |
|
1005 |
b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web UI |
1006 |
or the command line client `kamaki |
1007 |
<http://docs.dev.grnet.gr/kamaki/latest/index.html>`_. |
1008 |
|
1009 |
Once the Image is uploaded successfully, download the Image's metadata file |
1010 |
from the official snf-image page (`image_metadata link |
1011 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it, for |
1012 |
spawning a VM from Ganeti, in the next section. |
1013 |
|
1014 |
Of course, you can repeat the procedure to upload more Images, available from the |
1015 |
`official snf-image page |
1016 |
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. |
1017 |
|
1018 |
.. _ganeti-with-pithos-images: |
1019 |
|
1020 |
Spawning a VM from a Pithos+ Image, using Ganeti |
1021 |
------------------------------------------------ |
1022 |
|
1023 |
Now, it is time to test our installation so far. So, we have Astakos and |
1024 |
Pithos+ installed, we have a working Ganeti installation, the snf-image |
1025 |
definition installed on all VM-capable nodes and a Debian Squeeze Image on |
1026 |
Pithos+. Make sure you also have the `metadata file |
1027 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image. |
1028 |
|
1029 |
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line: |
1030 |
|
1031 |
.. code-block:: console |
1032 |
|
1033 |
# gnt-instance add -o snf-image+default --os-parameters |
1034 |
img_passwd=my_vm_example_passw0rd, |
1035 |
img_format=diskdump, |
1036 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1037 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1038 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1039 |
testvm1 |
1040 |
|
1041 |
In the above command: |
1042 |
|
1043 |
* ``img_passwd``: the arbitrary root password of your new instance |
1044 |
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image |
1045 |
* ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this |
1046 |
should have the format |
1047 |
``pithos://<username>/<container>/<filename>``: |
1048 |
* ``username``: ``user@example.com`` (defined during Astakos sign up) |
1049 |
* ``container``: ``pithos`` (default, if the Web UI was used) |
1050 |
* ``filename``: the name of file (visible also from the Web UI) |
1051 |
* ``img_properties``: taken from the metadata file. Only the two mandatory
  properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used here. `Learn more
1053 |
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_ |
1054 |
|
1055 |
If the ``gnt-instance add`` command returns successfully, then run: |
1056 |
|
1057 |
.. code-block:: console |
1058 |
|
1059 |
# gnt-instance info testvm1 | grep "console connection" |
1060 |
|
1061 |
to find out where to connect using VNC. If you can connect successfully and can |
1062 |
login to your new instance using the root password ``my_vm_example_passw0rd``, |
1063 |
then everything works as expected and you have your new Debian Base VM up and |
1064 |
running. |
1065 |
|
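As an illustration, suppose the command reports a VNC console on
``node1.example.com`` at port ``11000`` (hypothetical values). You could then
connect with any VNC client, e.g. with the TightVNC/TigerVNC viewer, where the
double colon selects a TCP port rather than a display number:

.. code-block:: console

   $ vncviewer node1.example.com::11000
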
1066 |
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured |
1067 |
to access the Pithos+ database and the Pithos+ backend data. Also, make sure |
1068 |
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` |
1069 |
succeeds but you cannot connect, again find out what went wrong. Do *NOT* |
1070 |
proceed to the next steps unless you are sure everything works till this point. |
1071 |
|
1072 |
If everything works, you have successfully connected Ganeti with Pithos+. Let's |
1073 |
move on to networking now. |
1074 |
|
1075 |
.. warning:: |
1076 |
You can bypass the networking sections and go straight to |
1077 |
:ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup |
1078 |
the Cyclades Network Service, but only the Cyclades Compute Service |
1079 |
(recommended for now). |
1080 |
|
1081 |
Network setup overview |
1082 |
---------------------- |
1083 |
|
1084 |
This part is deployment-specific and must be customized based on the specific
needs of the system administrator. To do so, the administrator needs to
understand how each level handles Virtual Networks, in order to set up the
backend appropriately before installing Cyclades. Please read the
:ref:`Network <networks>` section before proceeding.
1089 |
|
1090 |
Public Network setup |
1091 |
-------------------- |
1092 |
|
1093 |
Physical hosts' public network setup |
1094 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1095 |
|
1096 |
The physical hosts' setup is out of the scope of this guide. |
1097 |
|
1098 |
However, two common cases that you may want to consider (and choose from) are: |
1099 |
|
1100 |
a) One public bridge, where all VMs' public tap interfaces will connect. |
1101 |
b) IP-less routing over the same vlan on every host. |
1102 |
|
1103 |
Once you have set up your physical hosts (node1 and node2) for the Public
Network, you need to inform Ganeti about the Network's IP range.
1105 |
|
1106 |
Add the public network to Ganeti |
1107 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1108 |
|
1109 |
Once you have Ganeti with IP pool management up and running, you need to choose |
1110 |
the public network for your VMs and add it to Ganeti. Let's assume, that you |
1111 |
want to assign IPs from the ``5.6.7.0/27`` range to your new VMs, with |
1112 |
``5.6.7.1`` as their gateway. You can add the network by running: |
1113 |
|
1114 |
.. code-block:: console |
1115 |
|
1116 |
# gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 public_network |
1117 |
|
1118 |
Then, connect the network to all your nodegroups. We assume that we only have |
1119 |
one nodegroup (``default``) in our Ganeti cluster: |
1120 |
|
1121 |
.. code-block:: console |
1122 |
|
1123 |
# gnt-network connect public_network default public_link |
1124 |
|
1125 |
Your new network is now ready from the Ganeti perspective. Now, we need to setup |
1126 |
`NFDHCPD` to actually reply with the correct IPs (that Ganeti will choose for |
1127 |
each NIC). |
1128 |
|
1129 |
NFDHCPD |
1130 |
~~~~~~~ |
1131 |
|
1132 |
At this point, Ganeti knows about your preferred network and can manage the IP
1133 |
pool and choose a specific IP for each new VM's NIC. However, the actual |
1134 |
assignment of the IP to the NIC is not done by Ganeti. It is done after the VM |
1135 |
boots and its dhcp client makes a request. When this is done, `NFDHCPD` will |
1136 |
reply to the request with Ganeti's chosen IP. So, we need to install `NFDHCPD` |
1137 |
on all VM-capable nodes of the Ganeti cluster (node1 and node2 in our case) and |
1138 |
connect it to Ganeti: |
1139 |
|
1140 |
.. code-block:: console |
1141 |
|
1142 |
# apt-get install nfdhcpd |
1143 |
|
1144 |
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At a
minimum, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your
new VMs. Once you are finished, restart the server on all nodes:
1148 |
|
1149 |
.. code-block:: console |
1150 |
|
1151 |
# /etc/init.d/nfdhcpd restart |
1152 |
|
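For reference, the two settings mentioned above could end up looking like this
inside ``/etc/nfdhcpd/nfdhcpd.conf`` (the nameserver address is a placeholder;
keep the rest of the stock file as shipped):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 1.2.3.4
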
1153 |
If you are using ``ferm``, then you need to run the following: |
1154 |
|
1155 |
.. code-block:: console |
1156 |
|
1157 |
# echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf |
1158 |
# /etc/init.d/ferm restart |
1159 |
|
1160 |
Now, you need to connect `NFDHCPD` with Ganeti. To do that, you need to install
a custom KVM ifup script for use by Ganeti, as ``/etc/ganeti/kvm-vif-bridge``,
on all VM-capable GANETI-NODEs (node1 and node2). A sample implementation is
provided along with `snf-cyclades-gtools <snf-cyclades-gtools>`, which will be
installed in the next sections; however, you will probably need to write your
own, according to your underlying network configuration.
1166 |
|
1167 |
Testing the Public Network |
1168 |
~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1169 |
|
1170 |
So, we have set up the bridges/vlans on the physical hosts appropriately, we
have added the desired network to Ganeti, we have installed nfdhcpd and
installed the appropriate ``kvm-vif-bridge`` script under ``/etc/ganeti``.

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We assume that method (b) was used when setting up the
physical hosts. We will add a new VM, in the same way as in the previous
testing section. However, this time we will also add one NIC, configured to be
managed from our previously defined network. Run on the GANETI-MASTER (node1):
1179 |
|
1180 |
.. code-block:: console |
1181 |
|
1182 |
# gnt-instance add -o snf-image+default --os-parameters |
1183 |
img_passwd=my_vm_example_passw0rd, |
1184 |
img_format=diskdump, |
1185 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1186 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1187 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1188 |
--net 0:ip=pool,mode=routed,link=public_link |
1189 |
testvm2 |
1190 |
|
1191 |
If the above returns successfully, connect to the new VM and run: |
1192 |
|
1193 |
.. code-block:: console |
1194 |
|
1195 |
root@testvm2:~ # ifconfig -a |
1196 |
|
1197 |
If a network interface appears with an IP from your Public Network's range
1198 |
(``5.6.7.0/27``) and the corresponding gateway, then you have successfully |
1199 |
connected Ganeti with `NFDHCPD` (and ``kvm-vif-bridge`` works correctly). |
1200 |
|
1201 |
Now ping the outside world. If this works too, then you have also configured
your physical hosts' networking correctly.
1203 |
|
1204 |
Later, Cyclades will create the first NIC of every new VM by issuing an |
1205 |
analogous command. The first NIC of the instance will be the NIC connected to |
1206 |
the Public Network. The ``link`` variable will be set accordingly in the |
1207 |
Cyclades conf files later on the guide. |
1208 |
|
1209 |
Make sure everything works as expected, before proceeding with the Private |
1210 |
Networks setup. |
1211 |
|
1212 |
.. _private-networks-setup: |
1213 |
|
1214 |
Private Networks setup |
1215 |
---------------------- |
1216 |
|
1217 |
Physical hosts' private networks setup |
1218 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1219 |
|
1220 |
At the physical host's level, it is the administrator's responsibility to |
1221 |
configure the network appropriately, according to his/her needs (as for the |
1222 |
Public Network). |
1223 |
|
1224 |
However we propose the following setup: |
1225 |
|
1226 |
For every possible Private Network we assume a pre-provisioned bridge interface
exists on every host with the same name. Every Private Network will be
associated with one of the pre-provisioned bridges. Then the instance's new NIC
(while connecting to the Private Network) will be connected to that bridge. All
instances' tap interfaces that reside in the same Private Network will be
connected to the corresponding bridge of that network. Furthermore, every
bridge will be connected to a corresponding vlan. So, let's assume that our
Cyclades installation allows for 20 Private Networks to be set up. We should
pre-provision the corresponding bridges and vlans on all the hosts. We can do
this by running on all VM-capable Ganeti nodes (in our case node1 and node2):
1236 |
|
1237 |
.. code-block:: console |
1238 |
|
1239 |
# iface=eth0
1240 |
# for prv in $(seq 1 20); do |
1241 |
vlan=$prv |
1242 |
bridge=prv$prv |
1243 |
vconfig add $iface $vlan |
1244 |
ifconfig $iface.$vlan up |
1245 |
brctl addbr $bridge |
1246 |
brctl setfd $bridge 0 |
1247 |
brctl addif $bridge $iface.$vlan |
1248 |
ifconfig $bridge up |
1249 |
done |
1250 |
|
1251 |
The above will do the following (assuming ``eth0`` exists on both hosts): |
1252 |
|
1253 |
* provision 20 new bridges: ``prv1`` - ``prv20`` |
1254 |
* provision 20 new vlans: ``eth0.1`` - ``eth0.20`` |
1255 |
* attach each vlan to the corresponding bridge
1256 |
|
1257 |
You can run ``brctl show`` on both nodes to see if everything was setup |
1258 |
correctly. |
1259 |
|
1260 |
Everything is now setup to support the 20 Cyclades Private Networks. Later, |
1261 |
we will configure Cyclades to talk to those 20 pre-provisioned bridges. |
1262 |
|
1263 |
Testing the Private Networks |
1264 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1265 |
|
1266 |
To test the Private Networks, we will create two instances and put them in the |
1267 |
same Private Network (``prv1``). This means that the instances will have a |
1268 |
second NIC connected to the ``prv1`` pre-provisioned bridge. |
1269 |
|
1270 |
We run the same command as in the Public Network testing section, but with one |
1271 |
more argument for the second NIC: |
1272 |
|
1273 |
.. code-block:: console |
1274 |
|
1275 |
# gnt-instance add -o snf-image+default --os-parameters |
1276 |
img_passwd=my_vm_example_passw0rd, |
1277 |
img_format=diskdump, |
1278 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1279 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1280 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1281 |
--net 0:ip=pool,mode=routed,link=public_link |
1282 |
--net 1:ip=none,mode=bridged,link=prv1 |
1283 |
testvm3 |
1284 |
|
1285 |
# gnt-instance add -o snf-image+default --os-parameters |
1286 |
img_passwd=my_vm_example_passw0rd, |
1287 |
img_format=diskdump, |
1288 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1289 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1290 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1291 |
--net 0:ip=pool,mode=routed,link=public_link |
1292 |
--net 1:ip=none,mode=bridged,link=prv1 |
1293 |
testvm4 |
1294 |
|
1295 |
Above, we create two instances with their first NIC connected to the Public |
1296 |
Network and their second NIC connected to the first Private Network (``prv1``). |
1297 |
Now, connect to the instances using VNC and make sure everything works as |
1298 |
expected: |
1299 |
|
1300 |
a) The instances have access to the public internet through their first eth |
1301 |
interface (``eth0``), which has been automatically assigned a public IP. |
1302 |
|
1303 |
b) Set up the second eth interface of the instances (``eth1``) by assigning two
different private IPs (e.g.: ``10.0.0.1`` and ``10.0.0.2``) and the
corresponding netmask, as sketched below. If they ``ping`` each other
successfully, then the Private Network works.
1307 |
|
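Inside the two instances, step (b) could look roughly like this (the addresses
match the example above):

.. code-block:: console

   root@testvm3:~ # ifconfig eth1 10.0.0.1 netmask 255.255.255.0 up
   root@testvm4:~ # ifconfig eth1 10.0.0.2 netmask 255.255.255.0 up
   root@testvm4:~ # ping -c 3 10.0.0.1
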
1308 |
Repeat the procedure with more instances connected to different Private
Networks (``prv{1-20}``), by adding more NICs to each instance. For example, we
add an instance connected to the Public Network and Private Networks 1, 3 and
19:
1311 |
|
1312 |
.. code-block:: console |
1313 |
|
1314 |
# gnt-instance add -o snf-image+default --os-parameters |
1315 |
img_passwd=my_vm_example_passw0rd, |
1316 |
img_format=diskdump, |
1317 |
img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump", |
1318 |
img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' |
1319 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check |
1320 |
--net 0:ip=pool,mode=routed,link=public_link |
1321 |
--net 1:ip=none,mode=bridged,link=prv1 |
1322 |
--net 2:ip=none,mode=bridged,link=prv3 |
1323 |
--net 3:ip=none,mode=bridged,link=prv19 |
1324 |
testvm5 |
1325 |
|
1326 |
If everything works as expected, then you have finished the Network Setup at the |
1327 |
backend for both types of Networks (Public & Private). |
1328 |
|
1329 |
.. _cyclades-gtools: |
1330 |
|
1331 |
Cyclades Ganeti tools |
1332 |
--------------------- |
1333 |
|
1334 |
In order for Ganeti to be connected with Cyclades later on, we need the |
1335 |
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our |
1336 |
case). You can install them by running on both nodes:
1337 |
|
1338 |
.. code-block:: console |
1339 |
|
1340 |
# apt-get install snf-cyclades-gtools |
1341 |
|
1342 |
This will install the following: |
1343 |
|
1344 |
* ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ) |
1345 |
* ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``) |
1346 |
* ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages) |
1347 |
|
1348 |
Configure ``snf-cyclades-gtools`` |
1349 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
1350 |
|
1351 |
The package will install the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` |
1352 |
configuration file. At least we need to set the RabbitMQ endpoint for all tools |
1353 |
that need it: |
1354 |
|
1355 |
.. code-block:: console |
1356 |
|
1357 |
AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"] |
1358 |
|
1359 |
The above variables should reflect your :ref:`Message Queue setup |
1360 |
<rabbitmq-setup>`. This file should be editted in all Ganeti nodes. |
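
Note that ``AMQP_HOSTS`` is a list and can hold more than one endpoint. The
second broker below is hypothetical and only illustrates the format, in case
your Message Queue setup ever grows beyond the single RabbitMQ node used in
this guide:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672",
               "amqp://synnefo:example_rabbitmq_passw0rd@node2.example.com:5672"]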

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.
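
As an optional sanity check, you can ask the RAPI daemon for the cluster info
using the new credentials. The command below assumes the default RAPI port
(5080) and a self-signed certificate (hence ``-k``); adjust it to your setup:

.. code-block:: console

   # curl -k --user cyclades:example_rapi_passw0rd \
        https://ganeti.node1.example.com:5080/2/info

If this prints the cluster information, the RAPI user is in place.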

You have now finished with all needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app

.. warning:: Make sure you have installed ``python-gevent`` version >= 0.13.6.
   This version is available at squeeze-backports and can be installed by
   running: ``apt-get install -t squeeze-backports python-gevent``

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to end up with a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` option points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored). So we set that
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect users
when they log out. We point that to Astakos, too.

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"
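
If the ``vncauthproxy`` daemon is already running, restart it so that the new
``CHUID`` value takes effect. The init script name below is assumed to match
the package name; adjust it if your installation differs:

.. code-block:: console

   # /etc/init.d/vncauthproxy restart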

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster. Cyclades can
manage multiple Ganeti backends, but for the purpose of this guide, we won't get
into more detail regarding multiple backends.

By default, when you install Cyclades, it sets up a dummy first backend. You can
see it by running:

.. code-block:: console

   $ snf-manage backend-list

We modify this backend to reflect our already set up Ganeti cluster:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com"
                               --username=cyclades
                               --password=example_rapi_passw0rd
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``username`` and ``password`` denote the RAPI user's username and the RAPI
user's password. We set the above to reflect our :ref:`RAPI User setup
<rapi-user>`. The port is already set to the default RAPI port; you need to
change it only if you have changed it in your Ganeti cluster setup.

Once we set up the first backend to point at our Ganeti cluster, we update the
Cyclades backend status by running:

.. code-block:: console

   $ snf-manage backend-update-status
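
Re-running the listing command from the beginning of this section should now
show the backend pointing at our Ganeti cluster instead of the dummy entry:

.. code-block:: console

   $ snf-manage backend-list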

Add the Public Network
----------------------

After connecting Cyclades with our Ganeti cluster, we need to set up the Public
Network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27
                               --gateway=5.6.7.1
                               --subnet6=2001:648:2FFC:1322::/64
                               --gateway6=2001:648:2FFC:1322::1
                               --public --dhcp --type=PUBLIC_ROUTED
                               --name=public_network

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks
   $ snf-manage reconcile-pools

You can see all available networks by running:

.. code-block:: console

   $ snf-manage listnetworks

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
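
For example, to follow the log in real time:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log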

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup. Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Log in and then click 'cyclades' on the
top cloud bar. This should redirect you to:

`https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should currently be
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored on
Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
steps, even though you may already have uploaded an Image on Pithos+ from a
:ref:`previous <snf-image-images>` section:

* Upload an Image file to Pithos+
* Register that Image file to Plankton
* Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set storage.url "https://node2.example.com/v1"
   $ kamaki config set storage.account "user@example.com"
   $ kamaki config set global.token "bdY_example_user_tokenYUff=="

The token in the last kamaki command is our user's (``user@example.com``) token,
as it appears on the user's `Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos+ Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
          /srv/images/debian_base-6.0-7-x86_64.diskdump \
          debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote path on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.

Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base"
          pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump
          --public
          --disk-format=diskdump
          --property OSFAMILY=linux --property ROOT_PARTITION=1
          --property description="Debian Squeeze Base System"
          --property size=451 --property kernel=2.6.32 --property GUI="No GUI"
          --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Plankton. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All other
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
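
To double-check the registration from the command line (assuming your `kamaki`
version provides the ``image list`` call), you can list the Images known to
Plankton; the newly registered "Debian Base" should appear in the output:

.. code-block:: console

   $ kamaki image list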

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

`https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos+ Web UI and that ``kamaki image
register`` returned successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).
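
For example, to reach the new machine over SSH from any host that can reach the
Public Network (the address below is hypothetical; use the public IP that the
Cyclades UI shows for your machine, and the password noted during creation):

.. code-block:: console

   $ ssh root@5.6.7.2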

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Move on to the next section to test the Network
functionality from inside Cyclades and discover even more features.


General Testing
===============


Notes
=====