.. docs/quick-install-admin-guide.rst @ 699c8773
.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole Synnefo stack on two (2) physical nodes,
with minimum configuration. It installs Synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

* Identity Management (Astakos)
* Object Storage Service (Pithos+)
* Compute Service (Cyclades)
* Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you only want to install the Object Storage Service (Pithos+), follow this
guide and simply stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be
installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and to the second as "node2". We will also assume that their domain
names are "node1.example.com" and "node2.example.com" and that their IPs are
"4.3.2.1" and "4.3.2.2" respectively.


General Prerequisites
=====================

These are the general Synnefo prerequisites that you need on both node1 and
node2; they are related to all the services (Astakos, Pithos+, Cyclades,
Plankton).

To be able to download all Synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. Throughout this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.
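
As a minimal sketch of this NFS layout (the export options shown are
assumptions; tune them to your own security and consistency requirements),
node1 could export the directory and node2 could mount it like this:

.. code-block:: console

    root@node1:~ # apt-get install nfs-kernel-server
    root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_subtree_check)" >> /etc/exports
    root@node1:~ # exportfs -ra

    root@node2:~ # apt-get install nfs-common
    root@node2:~ # mount -t nfs 4.3.2.1:/srv/pithos /srv/pithos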

Before starting the Synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will
describe each node's general prerequisites separately. Any additional
configuration, specific to a Synnefo service for each node, will be described
in the service's section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

    # apt-get install apache2 postgresql rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

    # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
of the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the Pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be created
on node2 instead, but we do it on node1 for simplicity. We will create all
needed databases on node1, and node2 will then connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

    # /etc/init.d/postgresql restart

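As a quick, optional sanity check (assuming a PostgreSQL client is installed
on node2), you can verify that node2 can reach the database remotely:

.. code-block:: console

    root@node2:~ # psql -h 4.3.2.1 -U synnefo -d snf_apps -c "SELECT 1;"
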
Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the
following:

.. code-block:: console

    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--workers=4',
            '--log-level=debug',
        ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of Astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        # SetEnv no-gzip
        # SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv proxy-sendchunked
        SSLProxyEngine off
        ProxyErrorOverride off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

    # a2enmod ssl
    # a2enmod rewrite
    # a2dissite default
    # a2ensite synnefo
    # a2ensite synnefo-ssl
    # a2enmod headers
    # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

    # /etc/init.d/apache2 stop

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

    # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
    # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges; this will be done automatically
during the Cyclades setup.
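
To confirm that the user was created with the intended permissions, you can
list them (``list_user_permissions`` is a standard rabbitmqctl subcommand):

.. code-block:: console

    # rabbitmqctl list_user_permissions synnefo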

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

    # cd /srv/pithos
    # mkdir data
    # chown www-data:www-data data
    # chmod g+ws data

You are now done with all general prerequisites concerning node1. Let's move
on to node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)

You can install the above by running:

.. code-block:: console

    # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

    # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. Once you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the
following (same contents as on node1; you can just copy/paste the file):

.. code-block:: console

    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--workers=4',
            '--log-level=debug',
            '--timeout=43200',
        ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of Astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv proxy-sendchunked
        SSLProxyEngine off
        ProxyErrorOverride off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As on node1, enable sites and modules by running:

.. code-block:: console

    # a2enmod ssl
    # a2enmod rewrite
    # a2dissite default
    # a2ensite synnefo
    # a2ensite synnefo-ssl
    # a2enmod headers
    # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

    # /etc/init.d/apache2 stop

We are now done with all general prerequisites for node2. Now that we have
finished with the general prerequisites for both nodes, we can start
installing the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install Astakos, grab the package from our repository (make sure you made
the additions needed to your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

    # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that
snf-webproject has also been installed (it is marked as a "Recommended"
package). By default, Debian installs "Recommended" packages, but if you have
changed your configuration and the package didn't install automatically, you
should install it explicitly by running:

.. code-block:: console

    # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to
give the experienced administrator the ability to install Synnefo in a
custom-made django project. This corner case concerns only very advanced users
that know what they are doing and want to experiment with Synnefo.


Configuration of Astakos
========================

Conf Files
----------

After Astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While
installing new snf-* components, new configuration files will appear inside
the directory. In this guide (and for all services), we will edit only the
minimum necessary configuration options, to reflect our setup. Everything
else will remain as is.

After getting familiar with Synnefo, you will be able to customize the
software to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an Astakos dependency), we need
the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
        'default': {
            # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'
            'ENGINE': 'postgresql_psycopg2',
            # ATTENTION: This *must* be the absolute path if using sqlite3.
            # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
            'NAME': 'snf_apps',
            'USER': 'synnefo',              # Not used with sqlite3.
            'PASSWORD': 'example_passw0rd', # Not used with sqlite3.
            # Set to empty string for localhost. Not used with sqlite3.
            'HOST': '4.3.2.1',
            # Set to empty string for default. Not used with sqlite3.
            'PORT': '5432',
        }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django specific setting, used to provide a seed in
secret-key hashing algorithms. Set this to a random string of your choice and
keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

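One way to generate such a random string is with Python's
``random.SystemRandom``, which draws from the OS entropy source (this is just
a convenience sketch; any sufficiently long, unpredictable string will do):

```python
import random
import string

# Draw 50 characters from letters, digits and a few punctuation marks,
# using the OS entropy source via random.SystemRandom.
alphabet = string.ascii_letters + string.digits + '!@#$%^&*()-_=+'
rng = random.SystemRandom()
secret_key = ''.join(rng.choice(alphabet) for _ in range(50))
print(secret_key)
```

Paste the printed value into ``SECRET_KEY``.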
For Astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    ASTAKOS_IM_MODULES = ['local']

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

    ASTAKOS_SITENAME = '~okeanos demo example'

    ASTAKOS_CLOUD_SERVICES = (
        { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
        { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
        { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

    ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
    ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

    ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the Astakos login methods. For now, only
``local`` is supported. ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of
our deployment (common to all services). ``ASTAKOS_BASEURL`` is the Astakos
home page. ``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and
served by Astakos. The first element of the list points to a generic landing
page for your services (cyclades, pithos); if you don't have such a page, it
can be omitted. The second and third elements point to our services themselves
(the apps) and should be set as above.

For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
go to https://www.google.com/recaptcha/admin/create and create your own pair.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for Astakos:

.. code-block:: console

    # snf-manage migrate im

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.
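
You can also verify the HTTP-to-HTTPS redirect from the command line (an
optional check; inspect the ``Location`` response header):

.. code-block:: console

    $ curl -sI http://node1.example.com/im | grep -i "^Location"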

Let's create our first user. On the homepage, click the "CREATE ACCOUNT"
button and fill in all your data in the sign up form. Then click "SUBMIT". You
should now see a green box at the top, informing you that you made a
successful request and that the request has been sent to the administrators.
So far so good.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage listusers

This command should show you a list with only one user; the one we just
created. This user should have an id with a value of ``1``. It should also
have an "active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1`` and actually activates the user. When
running in production, the activation is done automatically, with the
different types of moderation that Astakos supports. You can see the
moderation methods (by invitation, whitelists, matching regexp, etc.) in the
Astakos specific documentation. In production, you can also manually activate
a user by sending him/her an activation email. See how to do this in the
:ref:`User activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the Astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue and install Pithos+ now.


Installation of Pithos+ on node2
================================

To install Pithos+, grab the packages from our repository (make sure you made
the additions needed to your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

    # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

    # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

Configuration of Pithos+
========================

Conf Files
----------

After Pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of Astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject; everything is set at node1.
You only need to change settings that have to do with Pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only the following options:

.. code-block:: console

    PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

    PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

    PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
    PITHOS_AUTHENTICATION_USERS = None

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above, we tell pithos+ that its database is
``snf_pithos`` at node1 and that it should connect as user ``synnefo`` with
password ``example_passw0rd``. All these settings were set up during node1's
"Database setup" section.

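The connection string follows the usual
``postgresql://user:password@host:port/dbname`` URL scheme; as a small sketch,
it can be assembled from the values used in this guide:

```python
# Compose the backend database URL from the values used in this guide.
user, password = 'synnefo', 'example_passw0rd'
host, port, db = 'node1.example.com', 5432, 'snf_pithos'
conn = 'postgresql://{0}:{1}@{2}:{3}/{4}'.format(user, password, host, port, db)
print(conn)  # postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos
```
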
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI where
the Astakos authentication API is available. If it is not set, pithos+ tries
to authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

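As an optional sanity check from node2, you can confirm that the
authentication endpoint is reachable (the token below is a placeholder and the
exact response format is not covered here; ``-k`` skips verification of the
self-signed snakeoil certificate):

.. code-block:: console

    root@node2:~ # curl -k -H "X-Auth-Token: <your_token>" https://node1.example.com/im/authenticate
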
670 |
Then we need to setup the web UI and connect it to astakos. To do so, edit |
671 |
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``: |
672 |
|
673 |
.. code-block:: console |
674 |
|
675 |
PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next=" |
676 |
PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback" |
677 |
|
678 |
The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if |
679 |
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the |
680 |
pithos+ feedback form. Astakos already provides a generic feedback form for all |
681 |
services, so we use this one. |
682 |
|
683 |
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the |
684 |
pithos+ web UI with the astakos web UI (through the top cloudbar): |
685 |
|
686 |
.. code-block:: console |
687 |
|
688 |
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/' |
689 |
PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = 'pithos' |
690 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services' |
691 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu' |
692 |
|
693 |
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common |
694 |
cloudbar. |
695 |
|
696 |
The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` registers the client as a new service |
697 |
served by astakos. It's name should be identical with the ``id`` name given at |
698 |
the astakos' ``ASTAKOS_CLOUD_SERVICES`` variable. Note that at the Astakos "Conf |
699 |
Files" section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES`` |
700 |
list, to the dictionary: ``{ 'url':'https://nod...', 'name':'pithos+', |
701 |
'id':'pithos }``. This item represents the pithos+ service. The ``id`` we set |
702 |
there, is the ``id`` we want here. |
703 |
|
704 |
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the |
705 |
pithos+ web client to get from astakos all the information needed to fill its |
706 |
own cloudbar. So we put our astakos deployment urls there. |
707 |
|
708 |
Servers Initialization |
709 |
---------------------- |
710 |
|
711 |
After configuration is done, we initialize the servers on node2: |
712 |
|
713 |
.. code-block:: console |
714 |
|
715 |
root@node2:~ # /etc/init.d/gunicorn restart |
716 |
root@node2:~ # /etc/init.d/apache2 restart |
717 |
|
718 |
You have now finished the Pithos+ setup. Let's test it now. |
719 |
|
720 |
|
721 |
Testing of Pithos+ |
722 |
================== |
723 |
|
724 |
Open your browser and go to the Astakos homepage: |
725 |
|
726 |
``http://node1.example.com/im`` |
727 |
|
728 |
Login, and you will see your profile page. Now, click the "pithos+" link on the |
729 |
top black cloudbar. If everything was setup correctly, this will redirect you |
730 |
to: |
731 |
|
732 |
``https://node2.example.com/ui`` |
733 |
|
734 |
and you will see the blue interface of the Pithos+ application. Click the |
735 |
orange "Upload" button and upload your first file. If the file gets uploaded |
736 |
successfully, then this is your first sign of a successful Pithos+ installation. |
737 |
Go ahead and experiment with the interface to make sure everything works |
738 |
correctly. |
739 |
|
740 |
You can also use the Pithos+ clients to sync data from your Windows PC or MAC. |
741 |
|
742 |
If you don't stumble on any problems, then you have successfully installed |
743 |
Pithos+, which you can use as a standalone File Storage Service. |
744 |
|
745 |
If you would like to do more, such as: |
746 |
|
747 |
* Spawning VMs |
748 |
* Spawning VMs from Images stored on Pithos+ |
749 |
* Uploading your custom Images to Pithos+ |
750 |
* Spawning VMs from those custom Images |
751 |
* Registering existing Pithos+ files as Images |
752 |
|
753 |
please continue with the rest of the guide. |
754 |
|
755 |
Installation of Cyclades (and Plankton) on node1 |
756 |
================================================ |
757 |
|
758 |
Installation of cyclades is a two step process: |
759 |
|
760 |
1. install the external services (prerequisites) on which cyclades depends |
761 |
2. install the synnefo software components associated with cyclades |
762 |
|
763 |
Prerequisites |
764 |
------------- |
765 |
.. _cyclades-install-ganeti: |
766 |
|
767 |
Ganeti installation |
768 |
~~~~~~~~~~~~~~~~~~~ |
769 |
|
770 |
Synnefo requires a working Ganeti installation at the backend. Installation |
771 |
of Ganeti is not covered by this document, please refer to |
772 |
`ganeti documentation <http://docs.ganeti.org/ganeti/current/html>`_ for all the |
773 |
gory details. A successful Ganeti installation concludes with a working |
774 |
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs <GANETI_NODES>`. |
775 |
|
776 |
.. _cyclades-install-db: |
777 |
|
778 |
Database |
779 |
~~~~~~~~ |
780 |
|
781 |
Database installation is done as part of the |
782 |
:ref:`snf-webproject <snf-webproject>` component. |
783 |
|
784 |
.. _cyclades-install-rabbitmq: |
785 |
|
786 |
RabbitMQ |
787 |
~~~~~~~~ |
788 |
|
789 |
RabbitMQ is used as a generic message broker for cyclades. It should be |
790 |
installed on two seperate :ref:`QUEUE <QUEUE_NODE>` nodes in a high availability |
791 |
configuration as described here: |
792 |
|
793 |
http://www.rabbitmq.com/pacemaker.html |
794 |
|
795 |
After installation, create a user and set its permissions: |
796 |
|
797 |
.. code-block:: console |
798 |
|
799 |
$ rabbitmqctl add_user <username> <password> |
800 |
$ rabbitmqctl set_permissions -p / <username> "^.*" ".*" ".*" |
801 |
|
802 |
The values set for the user and password must be mirrored in the |
803 |
``RABBIT_*`` variables in your settings, as managed by |
804 |
:ref:`snf-common <snf-common>`. |
805 |
|
806 |
.. todo:: Document an active-active configuration based on the latest version |
807 |
of RabbitMQ. |
808 |
|
.. _cyclades-install-vncauthproxy:

vncauthproxy
~~~~~~~~~~~~

To support out-of-band (OOB) console access to the VMs over VNC, the
vncauthproxy daemon must be running on every
:ref:`APISERVER <APISERVER_NODE>` node.

.. note:: The Debian package for vncauthproxy undertakes all configuration
   automatically.

Download and install the latest vncauthproxy from its own repository,
at `https://code.grnet.gr/git/vncauthproxy`, or a specific commit:

.. code-block:: console

   $ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy

Create ``/var/log/vncauthproxy`` and set its permissions appropriately.

Alternatively, build and install the Debian packages:

.. code-block:: console

   $ git checkout debian
   $ dpkg-buildpackage -b -uc -us
   # dpkg -i ../vncauthproxy_1.0-1_all.deb

.. warning::
   **Failure to build the package on the Mac.**

   ``libevent``, a requirement of gevent, which in turn is a requirement of
   vncauthproxy, is not included in `MacOSX` by default, and installing it
   with MacPorts does not produce a version that the gevent build process
   can find. A quick workaround is to execute the following commands::

      $ cd $SYNNEFO
      $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy
      <the above fails>
      $ cd build/gevent
      $ sudo python setup.py -I/opt/local/include -L/opt/local/lib build
      $ cd $SYNNEFO
      $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy

.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip
.. todo:: kpap: fix installation commands
|
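The ``/var/log/vncauthproxy`` directory mentioned above can be created as
sketched below; the owner shown here is hypothetical and must match the
user the daemon actually runs as on your system:

.. code-block:: console

   # mkdir -p /var/log/vncauthproxy
   # chown nobody:nogroup /var/log/vncauthproxy
   # chmod 0755 /var/log/vncauthproxy
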
.. _cyclades-install-nfdhcpd:

NFDHCPD
~~~~~~~

Set up Synnefo-specific networking on the Ganeti backend.
This part is deployment-specific and must be customized based on the
specific needs of the system administrators.

A reference installation will use a Synnefo-specific KVM ifup script,
NFDHCPD and pre-provisioned Linux bridges to support public and private
network functionality. For this:

Grab NFDHCPD from its own repository (https://code.grnet.gr/git/nfdhcpd),
install it, and modify ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network
configuration.

Install a custom KVM ifup script for use by Ganeti, as
``/etc/ganeti/kvm-vif-bridge``, on GANETI-NODEs. A sample implementation is
provided under ``/contrib/ganeti-hooks``. Set ``NFDHCPD_STATE_DIR`` to point
to NFDHCPD's state directory, usually ``/var/lib/nfdhcpd``.

.. todo:: soc: document NFDHCPD installation, settle on KVM ifup script
|
.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Install the :ref:`snf-image <snf-image>` Ganeti OS provider for image
deployment.

For :ref:`cyclades <cyclades>` to be able to launch VMs from specified
Images, you need the snf-image OS Provider installed on *all* Ganeti nodes.

Please see https://code.grnet.gr/projects/snf-image/wiki
for installation instructions and documentation on the design
and implementation of snf-image.

Please see https://code.grnet.gr/projects/snf-image/files
for the latest packages.

Images should be stored in ``extdump`` or ``diskdump`` format in a directory
of your choice, configurable as ``IMAGE_DIR`` in
:file:`/etc/default/snf-image`.
|
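For illustration, the relevant line in :file:`/etc/default/snf-image` might
look like the following; the path shown is only an example:

.. code-block:: console

   IMAGE_DIR="/var/lib/snf-image/images"
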
synnefo components
------------------

You need to install the appropriate synnefo software components on each node,
depending on its type; see :ref:`Architecture <cyclades-architecture>`.

Most synnefo components have dependencies on additional Python packages.
The dependencies are described inside each package, and are set up
automatically when installing using :command:`pip`, or when installing
using your system's package manager.

Please see the page of each synnefo software component for specific
installation instructions, where applicable.

Install the following synnefo components:

Nodes of type :ref:`APISERVER <APISERVER_NODE>`
   Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-webproject <snf-webproject>`,
      :ref:`snf-cyclades-app <snf-cyclades-app>`
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>`
   Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-cyclades-gtools <snf-cyclades-gtools>`
Nodes of type :ref:`LOGIC <LOGIC_NODE>`
   Components
      :ref:`snf-common <snf-common>`,
      :ref:`snf-webproject <snf-webproject>`,
      :ref:`snf-cyclades-app <snf-cyclades-app>`.
|
Configuration of Cyclades (and Plankton)
========================================

This section covers the configuration of the prerequisites for cyclades
and of the associated synnefo software components.
|
synnefo components
------------------

cyclades uses :ref:`snf-common <snf-common>` for settings.
Please refer to the configuration sections of
:ref:`snf-webproject <snf-webproject>`,
:ref:`snf-cyclades-app <snf-cyclades-app>` and
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for details on their
configuration.
|
Ganeti
~~~~~~

Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the
``BACKEND_PREFIX_ID`` setting, using a custom ``PREFIX_ID``.
|
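As an illustrative sketch only (the values below are hypothetical and must be
replaced with your cluster's actual details, in the shapes your snf-common
version expects):

.. code-block:: python

   # Hypothetical values; adjust to your Ganeti cluster.
   GANETI_MASTER_IP = "4.3.2.1"
   GANETI_NODES = ["node1.example.com"]
   BACKEND_PREFIX_ID = "snf-"
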
Database
~~~~~~~~

Once all components are installed and configured,
initialize the Django DB:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the fixtures ``{users, flavors, images}``,
which make the API usable by end users by defining a sample set of users,
hardware configurations (flavors) and OS images:

.. code-block:: console

   $ snf-manage loaddata /path/to/users.json
   $ snf-manage loaddata flavors
   $ snf-manage loaddata images

.. warning::
   Be sure to load a custom users.json and select a unique token
   for each of the initial and any other users defined in this file.
   **DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed
   configurations.

Sample users.json file:

.. literalinclude:: ../../synnefo/db/fixtures/users.json

`download <../_static/users.json>`_
|
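One generic way to generate a unique authentication token per user for your
custom users.json is to draw it from the operating system's random source;
the snippet below is a sketch, not a synnefo-specific tool:

.. code-block:: python

   import binascii
   import os

   # 32 random bytes, hex-encoded: a 64-character token.
   token = binascii.hexlify(os.urandom(32)).decode("ascii")
   print(token)
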
RabbitMQ
~~~~~~~~

Change the ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup
<cyclades-install-rabbitmq>`.
|
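For illustration, such settings might look like the following; the exact
variable names are defined by snf-common, and the values here are
hypothetical:

.. code-block:: python

   RABBIT_HOST = "rabbit.example.com:5672"  # hypothetical broker address
   RABBIT_USERNAME = "synnefo"              # hypothetical user
   RABBIT_PASSWORD = "example-password"
   RABBIT_VHOST = "/"
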
.. include:: ../../Changelog


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====