.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimal configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

 * Identity Management (Astakos)
 * Object Storage Service (Pithos+)
 * Compute Service (Cyclades)
 * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be
installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
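
After adding these lines, refresh the package index so the new repository
becomes visible to apt:

.. code-block:: console

   # apt-get update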

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By "all data", we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. Throughout this guide,
we will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.
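
The guide does not mandate a particular way to share the directory. A minimal
NFS sketch, assuming ``nfs-kernel-server`` on node1, ``nfs-common`` on node2
and the IPs given above, would be:

.. code-block:: console

   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,no_root_squash,sync)" >> /etc/exports
   root@node1:~ # exportfs -ra
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos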

Before starting the synnefo installation, you will need basic third-party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
own section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host the tables of
all django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

     # SetEnv no-gzip
     # SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http
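
You can optionally verify the syntax of the new configuration without starting
Apache:

.. code-block:: console

   # apache2ctl configtest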

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
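
You can verify that the user exists and has the expected permissions:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions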

We do not need to initialize the exchanges. This will be done automatically
during the Cyclades setup.

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(the same contents as on node1, plus an additional ``--timeout`` argument):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default, Debian
installs "Recommended" packages, but if you have changed your configuration and
the package wasn't installed automatically, install it manually by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users who know
what they are doing and want to experiment with synnefo.


Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While
installing new snf-* components, new configuration files will appear inside
the directory. In this guide (and for all services), we will edit only the
minimum necessary configuration options, to reflect our setup. Everything else
will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, giving the
administrator extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
        # ATTENTION: This *must* be the absolute path if using sqlite3.
        # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                    # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',       # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
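
One quick way to generate such a random string (a sketch using the system
Python on Squeeze):

.. code-block:: console

   $ python -c "import random, string; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for i in range(50))"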

For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
       { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
       { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
       { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now, only
``local`` is supported. The ``ASTAKOS_COOKIE_DOMAIN`` should be the base URL of
our domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served by
astakos. The first element of the tuple points to a generic landing page for
your services (cyclades, pithos). If you don't have such a page, it can be
omitted. The second and third elements point to our services themselves (the
apps) and should be set as above.

For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
go to https://www.google.com/recaptcha/admin/create and create your own pair.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example, we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage, click the "CREATE ACCOUNT" button
and fill in your data in the sign up form. Then click "SUBMIT". You should now
see a green box at the top, informing you that your request was successful and
has been sent to the administrators. So far so good; let's assume that you
created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user; the one we just
created. This user should have an id with a value of ``1``. It should also have
an "active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the
Astakos-specific documentation. In production, you can also manually activate a
user by sending him/her an activation email. See how to do this at the
:ref:`User activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this is
needed. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set on node1.
You only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only the following options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI where
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect
the pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` registers the client as a new service
served by astakos. Its name should be identical to the ``id`` name given in
astakos' ``ASTAKOS_CLOUD_SERVICES`` variable. Note that in the Astakos "Conf
Files" section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES``
list to the dictionary ``{ 'url':'https://nod...', 'name':'pithos+',
'id':'pithos' }``. This item represents the pithos+ service. The ``id`` we set
there is the ``id`` we want here.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+
installation. Go ahead and experiment with the interface to make sure
everything works correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble upon any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

 * Spawning VMs
 * Spawning VMs from Images stored on Pithos+
 * Uploading your custom Images to Pithos+
 * Spawning VMs from those custom Images
 * Registering existing Pithos+ files as Images

please continue with the rest of the guide.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Cyclades Prerequisites
----------------------

Ganeti
~~~~~~

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low-level VM
management for Cyclades, so Cyclades requires a working Ganeti installation at
the backend. Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti. If you are extremely impatient, you can arrive at the
above assumed setup by running:

.. code-block:: console

   root@node1:~ # apt-get install ganeti2
   root@node1:~ # apt-get install ganeti-htools
   root@node2:~ # apt-get install ganeti2
   root@node2:~ # apt-get install ganeti-htools

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have root
access between each other using ssh keys and not passwords. Also, make sure
there is an lvm volume group named ``ganeti`` that will host your VMs' disks.
Finally, set up a bridge interface on the host machines (e.g. ``br0``).
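
If such a volume group does not exist yet, you can create one with the standard
LVM tools. A sketch, assuming a spare disk ``/dev/sdb`` dedicated to VM
storage:

.. code-block:: console

   # pvcreate /dev/sdb
   # vgcreate ganeti /dev/sdb

With all the above in place, run on node1: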

.. code-block:: console

   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                      --no-etc-hosts --vg-name=ganeti \
                      --nic-parameters link=br0 --master-netdev eth0 \
                      ganeti.node1.example.com
   root@node1:~ # gnt-cluster modify --default-iallocator hail
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

   root@node1:~ # gnt-node add --no-node-setup --master-capable=yes \
                      --vm-capable=yes node2.example.com

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_.
Installation of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Installation
````````````
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image-host

Now, you need to download and save the corresponding helper package. Please see
`here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest
package. Let's assume that you installed snf-image-host version 0.3.5-1. Then,
you need snf-image-helper v0.3.5-1 on *both* nodes:

.. code-block:: console

   # cd /var/lib/snf-image/helper/
   # wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.3.5-1_all.deb

.. warning:: Be careful: Do NOT install the snf-image-helper debian package.
    Just put it under ``/var/lib/snf-image/helper/``.

Once you have downloaded the snf-image-helper package, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # ln -s snf-image-helper_0.3.5-1_all.deb snf-image-helper.deb
   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image-host to run successfully.

Configuration
`````````````
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without the need to provide
a public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

   PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos+.

Testing
```````

You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

snf-image's actual Images
~~~~~~~~~~~~~~~~~~~~~~~~~

Now that snf-image is installed successfully, we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image's Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for
the above Images to be stored:

 * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in
   :file:`/etc/default/snf-image`)
 * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
 * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official
`snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page (`image link
   <https://pithos.okeanos.grnet.gr/public/9epgb>`_).

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page (`image_metadata link
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it for
spawning a VM from Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

Spawning a VM from a Pithos+ Image, using Ganeti
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default \
       --os-parameters img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
       -t plain --disk 0:size=2G --no-name-check --no-ip-check \
       testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case),
   this should have the format ``pithos://<username>/<container>/<filename>``:

   * ``username``: ``user@example.com`` (defined during Astakos sign up)
   * ``container``: ``pithos`` (default, if the Web UI was used)
   * ``filename``: the name of the file (visible also from the Web UI)
 * ``img_properties``: taken from the metadata file. We used only the two
   mandatory properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
   <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.
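
Any VNC client will do; for example, assuming the previous command reported a
console on port ``11000`` of node1 (your port will differ):

.. code-block:: console

   $ vncviewer node1.example.com:11000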

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data. Also, make sure
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add``
succeeds but you cannot connect, again find out what went wrong. Do *NOT*
proceed to the next steps unless you are sure everything works up to this
point.

If everything works, you have successfully connected Ganeti with Pithos+.
Let's move on to networking now.

Network setup
~~~~~~~~~~~~~

Synnefo RAPI user
~~~~~~~~~~~~~~~~~

Once you have a working Ganeti installation, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades``. You can do this by editing the file
``/var/lib/ganeti/rapi/users`` and adding the line:

.. code-block:: console

   cyclades {HA1}a62c-example_hash_here-6f0436ddb write

More about Ganeti's RAPI users `here.
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_
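
The ``{HA1}`` value is the MD5 hash of ``username:realm:password``, where
Ganeti's RAPI realm is ``Ganeti Remote API``. Assuming the password
``example_rapi_passw0rd``, you can compute it like this:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5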

.. _cyclades-install-rabbitmq:

RabbitMQ
~~~~~~~~

RabbitMQ is used as a generic message broker for cyclades. It should be
installed on two separate :ref:`QUEUE <QUEUE_NODE>` nodes in a high
availability configuration as described here:

http://www.rabbitmq.com/pacemaker.html

The values set for the user and password must be mirrored in the
``RABBIT_*`` variables in your settings, as managed by
:ref:`snf-common <snf-common>`.

.. todo:: Document an active-active configuration based on the latest version
   of RabbitMQ.

.. _cyclades-install-vncauthproxy:

vncauthproxy
~~~~~~~~~~~~

To support OOB console access to the VMs over VNC, the vncauthproxy
daemon must be running on every :ref:`APISERVER <APISERVER_NODE>` node.

.. note:: The Debian package for vncauthproxy undertakes all configuration
   automatically.

Download and install the latest vncauthproxy from its own repository,
at `https://code.grnet.gr/git/vncauthproxy`, or a specific commit:

.. code-block:: console

   $ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy

Create ``/var/log/vncauthproxy`` and set its permissions appropriately.
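
For example (a sketch; adjust the owner to whichever user the daemon runs as
in your setup):

.. code-block:: console

   # mkdir -p /var/log/vncauthproxy
   # chown nobody:nogroup /var/log/vncauthproxy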
1065 |
|
1066 |
Alternatively, build and install Debian packages. |
1067 |
|
1068 |
.. code-block:: console |
1069 |
|
1070 |
$ git checkout debian |
1071 |
$ dpkg-buildpackage -b -uc -us |
1072 |
# dpkg -i ../vncauthproxy_1.0-1_all.deb |
1073 |
|
1074 |
.. warning:: |
1075 |
**Failure to build the package on the Mac.** |
1076 |
|
1077 |
``libevent``, a requirement for gevent which in turn is a requirement for |
1078 |
vncauthproxy is not included in `MacOSX` by default and installing it with |
1079 |
MacPorts does not lead to a version that can be found by the gevent |
1080 |
build process. A quick workaround is to execute the following commands:: |
1081 |
|
1082 |
$ cd $SYNNEFO |
1083 |
$ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy |
1084 |
<the above fails> |
1085 |
$ cd build/gevent |
1086 |
$ sudo python setup.py -I/opt/local/include -L/opt/local/lib build |
1087 |
$ cd $SYNNEFO |
1088 |
$ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy |
1089 |
|
1090 |
.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip |
1091 |
.. todo:: kpap: fix installation commands |
1092 |
|
1093 |
.. _cyclades-install-nfdhcpd: |
1094 |
|
1095 |
NFDHCPD |
1096 |
~~~~~~~ |
1097 |
|
1098 |
Setup Synnefo-specific networking on the Ganeti backend. |
1099 |
This part is deployment-specific and must be customized based on the |
1100 |
specific needs of the system administrators. |
1101 |
|
1102 |
A reference installation will use a Synnefo-specific KVM ifup script, |
1103 |
NFDHCPD and pre-provisioned Linux bridges to support public and private |
1104 |
network functionality. For this: |
1105 |
|
1106 |
Grab NFDHCPD from its own repository (https://code.grnet.gr/git/nfdhcpd), |
1107 |
install it, modify ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network |
1108 |
configuration. |
1109 |
|
1110 |
Install a custom KVM ifup script for use by Ganeti, as |
1111 |
``/etc/ganeti/kvm-vif-bridge``, on GANETI-NODEs. A sample implementation is |
1112 |
provided under ``/contrib/ganeti-hooks``. Set ``NFDHCPD_STATE_DIR`` to point |
1113 |
to NFDHCPD's state directory, usually ``/var/lib/nfdhcpd``. |
1114 |
|
1115 |
.. todo:: soc: document NFDHCPD installation, settle on KVM ifup script |
1116 |
|
1117 |
synnefo components |
1118 |
------------------ |
1119 |
|
1120 |
You need to install the appropriate synnefo software components on each node, |
1121 |
depending on its type, see :ref:`Architecture <cyclades-architecture>`. |
1122 |
|
1123 |
Most synnefo components have dependencies on additional Python packages. |
1124 |
The dependencies are described inside each package, and are setup |
1125 |
automatically when installing using :command:`pip`, or when installing |
1126 |
using your system's package manager. |
1127 |
|
1128 |
Please see the page of each synnefo software component for specific |
1129 |
installation instructions, where applicable. |
1130 |
|
1131 |
Install the following synnefo components: |
1132 |
|
1133 |
Nodes of type :ref:`APISERVER <APISERVER_NODE>` |
1134 |
Components |
1135 |
:ref:`snf-common <snf-common>`, |
1136 |
:ref:`snf-webproject <snf-webproject>`, |
1137 |
:ref:`snf-cyclades-app <snf-cyclades-app>` |
1138 |
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>` |
1139 |
Components |
1140 |
:ref:`snf-common <snf-common>`, |
1141 |
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` |
1142 |
Nodes of type :ref:`LOGIC <LOGIC_NODE>` |
1143 |
Components |
1144 |
:ref:`snf-common <snf-common>`, |
1145 |
:ref:`snf-webproject <snf-webproject>`, |
1146 |
:ref:`snf-cyclades-app <snf-cyclades-app>`. |
1147 |
|
1148 |
|
1149 |
Configuration of Cyclades (and Plankton) |
1150 |
======================================== |
1151 |
|
1152 |
This section targets the configuration of the prerequisites for cyclades, |
1153 |
and the configuration of the associated synnefo software components. |
1154 |
|
1155 |
synnefo components |
1156 |
------------------ |
1157 |
|
1158 |
cyclades uses :ref:`snf-common <snf-common>` for settings. |
1159 |
Please refer to the configuration sections of |
1160 |
:ref:`snf-webproject <snf-webproject>`, |
1161 |
:ref:`snf-cyclades-app <snf-cyclades-app>`, |
1162 |
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for more |
1163 |
information on their configuration. |
1164 |
|
1165 |
Ganeti |
1166 |
~~~~~~ |
1167 |
|
1168 |
Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on |
1169 |
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the |
1170 |
`BACKEND_PREFIX_ID`` setting, using an custom ``PREFIX_ID``. |
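
A minimal sketch of these settings, assuming the cluster built earlier and a
custom prefix (the exact expected formats are documented in the commented
settings files, which remain authoritative):

.. code-block:: console

   GANETI_MASTER_IP = "ganeti.node1.example.com"   # or the floating IP it resolves to
   GANETI_NODES = ["node1.example.com", "node2.example.com"]
   BACKEND_PREFIX_ID = "example-"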
1171 |
|
1172 |
Database |
1173 |
~~~~~~~~ |
1174 |
|
1175 |
Once all components are installed and configured, |
1176 |
initialize the Django DB: |
1177 |
|
1178 |
.. code-block:: console |
1179 |
|
1180 |
$ snf-manage syncdb |
1181 |
$ snf-manage migrate |
1182 |
|
1183 |
and load fixtures ``{users, flavors, images}``, |
1184 |
which make the API usable by end users by defining a sample set of users, |
1185 |
hardware configurations (flavors) and OS images: |
1186 |
|
1187 |
.. code-block:: console |
1188 |
|
1189 |
$ snf-manage loaddata /path/to/users.json |
1190 |
$ snf-manage loaddata flavors |
1191 |
$ snf-manage loaddata images |
1192 |
|
1193 |
.. warning:: |
1194 |
Be sure to load a custom users.json and select a unique token |
1195 |
for each of the initial and any other users defined in this file. |
1196 |
**DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed |
1197 |
configurations. |
1198 |
|
1199 |
sample users.json file: |
1200 |
|
1201 |
.. literalinclude:: ../../synnefo/db/fixtures/users.json |
1202 |
|
1203 |
`download <../_static/users.json>`_ |
1204 |
|
1205 |
RabbitMQ |
1206 |
~~~~~~~~ |
1207 |
|
1208 |
Change ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup |
1209 |
<cyclades-install-rabbitmq>`. |
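
A sketch of what these settings might look like for the queue we set up on
node1 (treat the setting names below as assumptions and check the commented
defaults in ``/etc/synnefo`` for the authoritative list):

.. code-block:: console

   RABBIT_HOST = "node1.example.com:5672"
   RABBIT_USERNAME = "synnefo"
   RABBIT_PASSWORD = "example_rabbitmq_passw0rd"
   RABBIT_VHOST = "/"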
1210 |
|
1211 |
.. include:: ../../Changelog |
1212 |
|
1213 |
|
1214 |
Testing of Cyclades (and Plankton) |
1215 |
================================== |
1216 |
|
1217 |
|
1218 |
General Testing |
1219 |
=============== |
1220 |
|
1221 |
|
1222 |
Notes |
1223 |
===== |