.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

* Identity Management (Astakos)
* Object Storage Service (Pithos+)
* Compute Service (Cyclades)
* Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium)
have not been released yet.

If you only want to install the Object Storage Service (Pithos+), follow this
guide and stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the
moment they are contained in the same software component. Furthermore, we will
install all services on the first physical node, except Pithos+, which will be
installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of this documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1"
and "4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on both node1 and
node2; they are related to all the services (Astakos, Pithos+, Cyclades,
Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``

You also need a shared directory visible by both nodes. Pithos+ will save all
data inside this directory. By "all data", we mean files, images, and Pithos+
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. Throughout this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.

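The NFS setup itself is outside the scope of this guide, but for reference,
here is a minimal sketch of the two configuration fragments involved, assuming
the standard Debian nfs-kernel-server package on node1 and nfs-common on node2
(the export options shown are illustrative; adjust them to your security
policy):

```console
# /etc/exports on node1 (illustrative options)
/srv/pithos 4.3.2.2(rw,sync,no_subtree_check)

# /etc/fstab on node2
node1.example.com:/srv/pithos /srv/pithos nfs defaults 0 0
```

After editing ``/etc/exports``, run ``exportfs -ra`` on node1 and
``mount /srv/pithos`` on node2.
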
Before starting the synnefo installation, you will need to install and
configure some basic third party software on the physical nodes. We will
describe each node's general prerequisites separately. Any additional
configuration, specific to a synnefo service for each node, will be described
in the service's section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host all tables
of the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be
created on node2 instead, but we do it on node1 for simplicity. We will create
all needed databases on node1, and node2 will then connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the
following:

.. code-block:: console

   CONFIG = {
       'mode': 'django',
       'environment': {
           'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
       },
       'working_dir': '/etc/synnefo',
       'user': 'www-data',
       'group': 'www-data',
       'args': (
           '--bind=127.0.0.1:8080',
           '--workers=4',
           '--log-level=debug',
       ),
   }

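The ``--workers=4`` value above is a reasonable default for this small
deployment. A commonly cited rule of thumb from gunicorn's documentation is
``(2 x CPU cores) + 1`` workers; the helper below is our own illustration of
that formula, not part of synnefo:

```python
# Sketch: compute a starting --workers value from the core count,
# following the common "(2 x cores) + 1" gunicorn rule of thumb.
import multiprocessing


def suggested_workers(cores=None):
    if cores is None:
        # Fall back to the number of cores on the local machine.
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1


print(suggested_workers(2))  # a 2-core node -> 5 workers
```

Treat the result as a starting point and tune it under real load.
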
.. warning:: Do NOT start the server yet, because it won't find the
   ``synnefo.settings`` module. We will start the server after successful
   installation of astakos. If the server is running:

   .. code-block:: console

      # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

     # SetEnv no-gzip
     # SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv             proxy-sendchunked
     SSLProxyEngine     off
     ProxyErrorOverride off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running:

   .. code-block:: console

      # /etc/init.d/apache2 stop

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now done with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the
following (same contents as on node1; you can just copy/paste the file):

.. code-block:: console

   CONFIG = {
       'mode': 'django',
       'environment': {
           'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
       },
       'working_dir': '/etc/synnefo',
       'user': 'www-data',
       'group': 'www-data',
       'args': (
           '--bind=127.0.0.1:8080',
           '--workers=4',
           '--log-level=debug',
       ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
   ``synnefo.settings`` module. We will start the server after successful
   installation of astakos. If the server is running:

   .. code-block:: console

      # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv             proxy-sendchunked
     SSLProxyEngine     off
     ProxyErrorOverride off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As on node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running:

   .. code-block:: console

      # /etc/init.d/apache2 stop

We are now done with all general prerequisites for node2. Now that we have
finished with the general prerequisites for both nodes, we can start
installing the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you have
made the additions needed to your ``/etc/apt/sources.list`` file, as described
previously) by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that
snf-webproject has also been installed (it is marked as a "Recommended"
package). By default, Debian installs "Recommended" packages, but if you have
changed your configuration and the package didn't get installed automatically,
you should install it explicitly by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to
give experienced administrators the ability to install synnefo in a
custom-made django project. This corner case concerns only very advanced users
who know what they are doing and want to experiment with synnefo.


Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. When you
install new snf-* components, new configuration files will appear inside the
directory. In this guide (and for all services), we will edit only the minimum
necessary configuration options, to reflect our setup. Everything else will
remain as is.

After getting familiar with synnefo, you will be able to customize the
software as you wish, to fit your needs. Many options are available, to
empower the administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we need
the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
       'default': {
           # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'
           'ENGINE': 'postgresql_psycopg2',
           # ATTENTION: This *must* be the absolute path if using sqlite3.
           # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
           'NAME': 'snf_apps',
           'USER': 'synnefo',               # Not used with sqlite3.
           'PASSWORD': 'example_passw0rd',  # Not used with sqlite3.
           # Set to empty string for localhost. Not used with sqlite3.
           'HOST': '4.3.2.1',
           # Set to empty string for default. Not used with sqlite3.
           'PORT': '5432',
       }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting, used to provide a seed for
secret-key hashing algorithms. Set this to a random string of your choice and
keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

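If you would rather generate such a key than type one, here is a short sketch;
any cryptographically random string of similar length works, and the helper
below is our own illustration, not part of synnefo:

```python
# Sketch: generate a 50-character random SECRET_KEY.
# random.SystemRandom draws from os.urandom, so the result is not
# predictable from earlier output.
import random
import string


def generate_secret_key(length=50):
    alphabet = string.ascii_letters + string.digits + '!@#$%^&*(-_=+)'
    rng = random.SystemRandom()
    return ''.join(rng.choice(alphabet) for _ in range(length))


print(generate_secret_key())
```

Paste the printed value into ``SECRET_KEY`` and keep it out of version
control.
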
For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
       { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
       { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
       { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now, only
local is supported. ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our
deployment (common to all services). ``ASTAKOS_BASEURL`` is the astakos home
page. ``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served
by astakos. The first element of the tuple is used to point to a generic
landing page for your services (cyclades, pithos). If you don't have such a
page, it can be omitted. The second and third elements point to the services
themselves (the apps) and should be set as above.

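Note that the astakos cookie is scoped to ``ASTAKOS_COOKIE_DOMAIN``, so every
service URL must live under that domain or single sign-on will silently fail.
A small runnable sketch checking this for the values used in this guide (the
helper is ours, for illustration only):

```python
# Sketch: verify every service hostname falls under the cookie domain.
from urllib.parse import urlparse  # Python 3; the urlparse module on Python 2

ASTAKOS_COOKIE_DOMAIN = '.example.com'
SERVICE_URLS = [
    'https://node1.example.com/im/',
    'https://node1.example.com/ui/',
    'https://node2.example.com/ui/',
]


def covered_by_cookie_domain(url, cookie_domain=ASTAKOS_COOKIE_DOMAIN):
    host = urlparse(url).hostname
    # The bare domain itself, or any subdomain of it, receives the cookie.
    return host == cookie_domain.lstrip('.') or host.endswith(cookie_domain)


for url in SERVICE_URLS:
    assert covered_by_cookie_domain(url), url
print('all service hosts fall under ' + ASTAKOS_COOKIE_DOMAIN)
```
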
For ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``, go
to https://www.google.com/recaptcha/admin/create and create your own pair.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. On the homepage, click the "CREATE ACCOUNT"
button and fill in all your data in the sign-up form. Then click "SUBMIT". You
should now see a green box at the top, informing you that you made a
successful request and that the request has been sent to the administrators.
So far so good.

Now we need to activate that user. Return to a command prompt on node1 and
run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user: the one we just
created. This user should have an id with a value of ``1``. It should also
have an "active" status with a value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This changes the active value to ``1`` and actually activates the user. When
running in production, activation is done automatically, with the different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos-specific
documentation. In production, you can also manually activate a user by sending
him/her an activation email. See how to do this in the :ref:`User activation
<user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue and install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you have
made the additions needed to your ``/etc/apt/sources.list`` file, as described
previously) by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (it is marked as a "Recommended" package). Refer to
the "Installation of Astakos on node1" section if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, just as on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set on node1.
You only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only the following two options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ backend where to
find its database. Above, we tell pithos+ that its database is ``snf_pithos``
on node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All these settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ backend where to
store its data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

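A typo in the connection string only shows up when the backend first touches
the database. As a quick sanity check, you can parse the string and compare
its parts against what was created in node1's "Database setup" section
(illustrative sketch; Python 3 syntax shown, use the ``urlparse`` module on
Python 2):

```python
# Sketch: split the DSN into its components and compare them with the
# database, user, host and port configured earlier in this guide.
from urllib.parse import urlparse

dsn = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
parts = urlparse(dsn)

assert parts.scheme == 'postgresql'
assert parts.username == 'synnefo'
assert parts.hostname == 'node1.example.com'
assert parts.port == 5432
assert parts.path.lstrip('/') == 'snf_pithos'
print('connection string matches the Database setup section')
```
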
Then, we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points to the
pithos+ feedback form. Astakos already provides a generic feedback form for
all services, so we use that one.

Then, edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect
the pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` option tells the client where to find the astakos
common cloudbar.

The ``CLOUDBAR_ACTIVE_SERVICE`` option registers the client as a new service
served by astakos. Its name should be identical to the ``id`` given in the
astakos ``ASTAKOS_CLOUD_SERVICES`` variable. Note that in the Astakos "Conf
Files" section, we actually set the third item of the
``ASTAKOS_CLOUD_SERVICES`` tuple to the dictionary:
``{ 'url':'https://nod...', 'name':'pithos+', 'id':'pithos' }``. This item
represents the pithos+ service. The ``id`` we set there is the ``id`` we want
here.

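This agreement between the two settings can be expressed as a runnable check,
using the values from this guide (the assertion is illustrative, not something
synnefo runs itself):

```python
# Sketch: the cloudbar only highlights the right entry when
# CLOUDBAR_ACTIVE_SERVICE matches an 'id' in ASTAKOS_CLOUD_SERVICES.
ASTAKOS_CLOUD_SERVICES = (
    {'url': 'https://node1.example.com/im/', 'name': '~okeanos home', 'id': 'cloud'},
    {'url': 'https://node1.example.com/ui/', 'name': 'cyclades', 'id': 'cyclades'},
    {'url': 'https://node2.example.com/ui/', 'name': 'pithos+', 'id': 'pithos'},
)
CLOUDBAR_ACTIVE_SERVICE = 'pithos'

known_ids = [service['id'] for service in ASTAKOS_CLOUD_SERVICES]
assert CLOUDBAR_ACTIVE_SERVICE in known_ids, known_ids
print('cloudbar id is registered with astakos')
```
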
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by
the pithos+ web client to get from astakos all the information needed to fill
its own cloudbar. So we put our astakos deployment's URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

711 |
Installation of Cyclades (and Plankton) on node1 |
712 |
================================================ |
713 |
|
714 |
Installation of cyclades is a two step process: |
715 |
|
716 |
1. install the external services (prerequisites) on which cyclades depends |
717 |
2. install the synnefo software components associated with cyclades |
718 |
|
719 |
Prerequisites |
720 |
------------- |
721 |
.. _cyclades-install-ganeti: |
722 |
|
723 |
Ganeti installation |
724 |
~~~~~~~~~~~~~~~~~~~ |
725 |
|
726 |
Synnefo requires a working Ganeti installation at the backend. Installation |
727 |
of Ganeti is not covered by this document, please refer to |
728 |
`ganeti documentation <http://docs.ganeti.org/ganeti/current/html>`_ for all the |
729 |
gory details. A successful Ganeti installation concludes with a working |
730 |
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs <GANETI_NODES>`. |
731 |
|
732 |
.. _cyclades-install-db: |
733 |
|
734 |
Database |
735 |
~~~~~~~~ |
736 |
|
737 |
Database installation is done as part of the |
738 |
:ref:`snf-webproject <snf-webproject>` component. |
739 |
|
740 |
.. _cyclades-install-rabbitmq: |
741 |
|
742 |
RabbitMQ |
743 |
~~~~~~~~ |
744 |
|
745 |
RabbitMQ is used as a generic message broker for cyclades. It should be |
746 |
installed on two seperate :ref:`QUEUE <QUEUE_NODE>` nodes in a high availability |
747 |
configuration as described here: |
748 |
|
749 |
http://www.rabbitmq.com/pacemaker.html |
750 |
|
751 |
After installation, create a user and set its permissions: |
752 |
|
753 |
.. code-block:: console |
754 |
|
755 |
$ rabbitmqctl add_user <username> <password> |
756 |
$ rabbitmqctl set_permissions -p / <username> "^.*" ".*" ".*" |
757 |
|
758 |
The values set for the user and password must be mirrored in the |
759 |
``RABBIT_*`` variables in your settings, as managed by |
760 |
:ref:`snf-common <snf-common>`. |
761 |
|
762 |
.. todo:: Document an active-active configuration based on the latest version |
763 |
of RabbitMQ. |
764 |
|
765 |
.. _cyclades-install-vncauthproxy: |
766 |
|
767 |
vncauthproxy |
768 |
~~~~~~~~~~~~ |
769 |
|
770 |
To support OOB console access to the VMs over VNC, the vncauthproxy |
771 |
daemon must be running on every :ref:`APISERVER <APISERVER_NODE>` node. |
772 |
|
773 |
.. note:: The Debian package for vncauthproxy undertakes all configuration |
774 |
automatically. |
775 |
|
776 |
Download and install the latest vncauthproxy from its own repository, |
777 |
at `https://code.grnet.gr/git/vncauthproxy`, or a specific commit: |
778 |
|
779 |
.. code-block:: console |
780 |
|
781 |
$ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy |
782 |
|
783 |
Create ``/var/log/vncauthproxy`` and set its permissions appropriately. |
784 |
|
785 |
Alternatively, build and install Debian packages. |
786 |
|
787 |
.. code-block:: console |
788 |
|
789 |
$ git checkout debian |
790 |
$ dpkg-buildpackage -b -uc -us |
791 |
# dpkg -i ../vncauthproxy_1.0-1_all.deb |
792 |
|
793 |
.. warning:: |
794 |
**Failure to build the package on the Mac.** |
795 |
|
796 |
``libevent``, a requirement for gevent which in turn is a requirement for |
797 |
vncauthproxy is not included in `MacOSX` by default and installing it with |
798 |
MacPorts does not lead to a version that can be found by the gevent |
799 |
build process. A quick workaround is to execute the following commands:: |
800 |
|
801 |
$ cd $SYNNEFO |
802 |
$ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy |
803 |
<the above fails> |
804 |
$ cd build/gevent |
805 |
$ sudo python setup.py -I/opt/local/include -L/opt/local/lib build |
806 |
$ cd $SYNNEFO |
807 |
$ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy |
808 |
|
809 |
.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip |
810 |
.. todo:: kpap: fix installation commands |
811 |
|
812 |
.. _cyclades-install-nfdhcpd:

NFDHCPD
~~~~~~~

Set up Synnefo-specific networking on the Ganeti backend.
This part is deployment-specific and must be customized based on the
specific needs of the system administrators.

A reference installation will use a Synnefo-specific KVM ifup script,
NFDHCPD and pre-provisioned Linux bridges to support public and private
network functionality. For this:

Grab NFDHCPD from its own repository (https://code.grnet.gr/git/nfdhcpd),
install it, and modify ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network
configuration.
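
As a rough illustration only, the file groups its options into INI-style
sections. Every key and value below is an assumption for illustration, not
authoritative; check the sample configuration shipped with your NFDHCPD
version for the real option names:

.. code-block:: ini

   # /etc/nfdhcpd/nfdhcpd.conf -- illustrative sketch, NOT authoritative
   [general]
   datapath = /var/lib/nfdhcpd    ; state directory (see NFDHCPD_STATE_DIR below)
   logdir = /var/log/nfdhcpd

   [dhcp]
   enable_dhcp = yes
   server_ip = 192.0.2.1          ; placeholder address, adapt to your subnet
   nameservers = 192.0.2.2        ; placeholder DNS server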
Install a custom KVM ifup script for use by Ganeti, as
``/etc/ganeti/kvm-vif-bridge``, on GANETI-NODEs. A sample implementation is
provided under ``/contrib/ganeti-hooks``. Set ``NFDHCPD_STATE_DIR`` to point
to NFDHCPD's state directory, usually ``/var/lib/nfdhcpd``.

.. todo:: soc: document NFDHCPD installation, settle on KVM ifup script

.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Install the :ref:`snf-image <snf-image>` Ganeti OS provider for image
deployment.

For :ref:`cyclades <cyclades>` to be able to launch VMs from specified
Images, you need the snf-image OS Provider installed on *all* Ganeti nodes.

Please see https://code.grnet.gr/projects/snf-image/wiki
for installation instructions and documentation on the design
and implementation of snf-image.

Please see https://code.grnet.gr/projects/snf-image/files
for the latest packages.

Images should be stored in ``extdump`` or ``diskdump`` format in a directory
of your choice, configurable as ``IMAGE_DIR`` in
:file:`/etc/default/snf-image`.
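
For example, assuming you keep the dumps under ``/var/lib/snf-image`` (the
path itself is an arbitrary choice, not a requirement), the relevant line in
:file:`/etc/default/snf-image` would be:

.. code-block:: bash

   # /etc/default/snf-image
   # Directory holding the extdump/diskdump images; any directory with
   # enough free space will do -- this path is only an example.
   IMAGE_DIR="/var/lib/snf-image"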
synnefo components
------------------

You need to install the appropriate synnefo software components on each node,
depending on its type; see :ref:`Architecture <cyclades-architecture>`.

Most synnefo components have dependencies on additional Python packages.
The dependencies are described inside each package, and are set up
automatically when installing using :command:`pip`, or when installing
using your system's package manager.

Please see the page of each synnefo software component for specific
installation instructions, where applicable.

Install the following synnefo components:

Nodes of type :ref:`APISERVER <APISERVER_NODE>`
    Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-webproject <snf-webproject>`,
      :ref:`snf-cyclades-app <snf-cyclades-app>`
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>`
    Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-cyclades-gtools <snf-cyclades-gtools>`
Nodes of type :ref:`LOGIC <LOGIC_NODE>`
    Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-webproject <snf-webproject>`,
      :ref:`snf-cyclades-app <snf-cyclades-app>`.


Configuration of Cyclades (and Plankton)
========================================

This section covers the configuration of the prerequisites for cyclades
and of the associated synnefo software components.

synnefo components
------------------

cyclades uses :ref:`snf-common <snf-common>` for settings.
Please refer to the configuration sections of
:ref:`snf-webproject <snf-webproject>`,
:ref:`snf-cyclades-app <snf-cyclades-app>`, and
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for more
information on their configuration.

Ganeti
~~~~~~

Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the
``BACKEND_PREFIX_ID`` setting, using a custom ``PREFIX_ID``.
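
As a sketch only, using the example hostnames and IPs from the beginning of
this guide: every value below is a placeholder to adapt, and the exact shape
of ``GANETI_CLUSTER_INFO`` (assumed here to be a RAPI host/port/credentials
tuple) must be checked against the snf-cyclades-app default settings:

.. code-block:: python

   # Illustrative values only -- adapt to your own Ganeti cluster.
   GANETI_MASTER_IP = '4.3.2.1'
   GANETI_NODES = ['node1.example.com', 'node2.example.com']
   # Assumed tuple layout: (RAPI host, RAPI port, username, password).
   GANETI_CLUSTER_INFO = (GANETI_MASTER_IP, 5080, 'rapi-user', 'rapi-password')
   # Unique prefix for instance names created by this deployment:
   BACKEND_PREFIX_ID = 'snf-'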
Database
~~~~~~~~

Once all components are installed and configured,
initialize the Django DB:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load fixtures ``{users, flavors, images}``,
which make the API usable by end users by defining a sample set of users,
hardware configurations (flavors) and OS images:

.. code-block:: console

   $ snf-manage loaddata /path/to/users.json
   $ snf-manage loaddata flavors
   $ snf-manage loaddata images

.. warning::
   Be sure to load a custom ``users.json`` and select a unique token
   for each of the initial and any other users defined in this file.
   **DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed
   configurations.

sample users.json file:

.. literalinclude:: ../../synnefo/db/fixtures/users.json

`download <../_static/users.json>`_
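
One simple way to generate a unique token per user, assuming ``openssl`` is
available on the node (any source of cryptographically random characters
works just as well), is:

.. code-block:: console

   $ openssl rand -hex 32

Paste each generated value into the token field of the corresponding user
entry in your custom ``users.json``.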
RabbitMQ
~~~~~~~~

Change ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup
<cyclades-install-rabbitmq>`.
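
As a hedged sketch of what such settings typically look like: the setting
names and value formats below are assumptions to verify against the
snf-cyclades-app default settings, and all values are placeholders:

.. code-block:: python

   # Illustrative placeholders -- match these against the RABBIT_* names
   # actually defined by your snf-cyclades-app version and your broker.
   RABBIT_HOST = 'node1.example.com:5672'   # assumed host:port format
   RABBIT_USERNAME = 'synnefo'
   RABBIT_PASSWORD = 'example-password'
   RABBIT_VHOST = '/'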

.. include:: ../../Changelog


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====