.. _quick-install-admin-guide:

Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

* Identity Management (Astakos)
* Object Storage Service (Pithos)
* Compute Service (Cyclades)
* Image Service (part of Cyclades)
* Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their public IPs are
"4.3.2.1" and "4.3.2.2" respectively. It is important that the two machines are
under the same domain name. In case you choose to follow a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the node1 section below for more details.

General Prerequisites
=====================

These are the general synnefo prerequisites that you need on both node1 and
node2; they are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze/``
| ``deb-src http://apt.dev.grnet.gr squeeze/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible to both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the no_root_squash flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
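
A minimal sketch of such an NFS setup follows. The export options and the use
of the stock Debian ``nfs-kernel-server``/``nfs-common`` packages are
assumptions; adapt them to your environment:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos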

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).


Node1
-----


General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* public certificate
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)
* ntp (NTP daemon)
* gevent
* dns server

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ >= 2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

   deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

   # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
   # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

   # apt-get update
   # apt-get install rabbitmq-server


Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
related to the Django apps. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the Pithos backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

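
Optionally, you can verify at this point that remote connections to the new
database work, e.g. from node2 (a quick sanity check; it assumes the
postgresql client tools are already available there):

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps -c 'SELECT 1'

If this succeeds after prompting for the password, the ``listen_addresses``
and ``pg_hba.conf`` changes are in effect.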

Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:

.. code-block:: console

   # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
   ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
   instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
   ``--worker-class=sync``. We will start the server after successful
   installation of astakos. If the server is running::

      # /etc/init.d/gunicorn stop


Certificate Creation
~~~~~~~~~~~~~~~~~~~~

Node1 will host Cyclades. Cyclades should communicate with the other snf tools
over a trusted connection. In order for the connection to be trusted, the keys
provided to apache below should be signed with a certificate. This certificate
should be added to all nodes. In case you don't have signed keys, you can
create a self-signed certificate and sign your keys with it. To do so, run the
following on node1:

.. code-block:: console

   # aptitude install openvpn
   # mkdir /etc/openvpn/easy-rsa
   # cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
   # cd /etc/openvpn/easy-rsa/2.0
   # vim vars

In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``:

.. code-block:: console

   # . ./vars
   # ./clean-all

Now you can create the certificate:

.. code-block:: console

   # ./build-ca

The previous command will create a ``ca.crt`` file. Copy this file under the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

   # update-ca-certificates

to update the records. You will have to do the same on node2 as well.

Now you can create the keys and sign them with the certificate:

.. code-block:: console

   # ./build-key-server node1.example.com

This will create a .pem and a .key file in your current folder. Copy these into
``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively, and use them in the
apache2 configuration file below instead of the defaults.


Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>


Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

     # SetEnv no-gzip
     # SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv proxy-sendchunked
     SSLProxyEngine off
     ProxyErrorOverride off

     ProxyPass /static !
     ProxyPass / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]

     SSLEngine on
     SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
   directory listing in apache::

      # a2dismod autoindex


.. warning:: Do NOT start/restart the server yet. If the server is running::

      # /etc/init.d/apache2 stop

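
Although Apache must stay stopped for now, you can optionally check that the
new virtual host files are syntactically valid. This is only a sanity check
and does not start the server:

.. code-block:: console

   # apache2ctl configtest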

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically
during the Cyclades setup.

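
If you want to double-check that the user was created, listing the RabbitMQ
users is a quick, optional verification:

.. code-block:: console

   # rabbitmqctl list_users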

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data


DNS server setup
~~~~~~~~~~~~~~~~

If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:

.. code-block:: console

   # apt-get install dnsmasq

Then edit your ``/etc/hosts`` file as follows:

.. code-block:: console

   4.3.2.1     node1.example.com
   4.3.2.2     node2.example.com

Finally, edit the ``/etc/dnsmasq.conf`` file and specify the ``listen-address``
and the ``interface`` you would like to listen to.
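
For example, a minimal ``/etc/dnsmasq.conf`` for this setup could contain the
following two lines (``eth0`` is an assumption; use the interface that actually
carries your nodes' traffic):

.. code-block:: console

   listen-address=4.3.2.1
   interface=eth0

After editing the file, restart the service (e.g. with
``/etc/init.d/dnsmasq restart``) for the changes to take effect.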

Also add the following to your ``/etc/resolv.conf`` file:

.. code-block:: console

   nameserver 4.3.2.1

You are now ready with all general prerequisites concerning node1. Let's go to
node2.


Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* ntp (NTP daemon)
* gevent
* certificates
* dns setup

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2


Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are outside the scope of this guide.


Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
(as we did on node1):

.. code-block:: console

   # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
   ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
   instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
   ``--worker-class=sync``. We will start the server after successful
   installation of astakos. If the server is running::

      # /etc/init.d/gunicorn stop


Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv proxy-sendchunked
     SSLProxyEngine off
     ProxyErrorOverride off

     ProxyPass /static !
     ProxyPass / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
   directory listing in apache::

      # a2dismod autoindex

.. warning:: Do NOT start/restart the server yet. If the server is running::

      # /etc/init.d/apache2 stop


Acquire certificate
~~~~~~~~~~~~~~~~~~~

Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/``.
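
One possible way to copy it is over SSH, as sketched below; the source path
assumes the default easy-rsa 2.0 layout used on node1 earlier, so adjust it if
your ``ca.crt`` lives elsewhere:

.. code-block:: console

   root@node2:~ # scp root@node1.example.com:/etc/openvpn/easy-rsa/2.0/keys/ca.crt \
                      /usr/local/share/ca-certificates/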

Then run:

.. code-block:: console

   # update-ca-certificates

to update the records.


DNS Setup
~~~~~~~~~

Add the following line to the ``/etc/resolv.conf`` file

.. code-block:: console

   nameserver 4.3.2.1

to inform the node about the new DNS server.

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-pithos-backend

.. _conf-astakos:


Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
       'default': {
           # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
           'ENGINE': 'django.db.backends.postgresql_psycopg2',
           # ATTENTION: This *must* be the absolute path if using sqlite3.
           # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
           'NAME': 'snf_apps',
           'USER': 'synnefo',  # Not used with sqlite3.
           'PASSWORD': 'example_passw0rd',  # Not used with sqlite3.
           # Set to empty string for localhost. Not used with sqlite3.
           'HOST': '4.3.2.1',
           # Set to empty string for default. Not used with sqlite3.
           'PORT': '5432',
       }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

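
One convenient way to generate such a random string on the node itself (just a
sketch; any sufficiently long random string will do):

.. code-block:: console

   # python -c "import random, string; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for i in range(50))"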

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
extra path (``/astakos`` here) is recommended in order to distinguish
components, if more than one are installed on the same machine.

.. note:: For the purpose of this guide, we don't enable recaptcha
   authentication. If you would like to enable it, you have to edit the
   following options:

   .. code-block:: console

      ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
      ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
      ASTAKOS_RECAPTCHA_USE_SSL = True
      ASTAKOS_RECAPTCHA_ENABLED = True

   For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
   go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'

   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. _email-configuration:


Email delivery configuration
----------------------------

Many of the ``astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process the
service sends an email to the registered email address containing an email
verification URL; after the user verifies the email address, astakos once again
needs to notify administrators with a notice that a new account has just been
verified.

More specifically, astakos sends emails in the following cases:

- An email containing a verification link after each signup process.
- An email to the people listed in the ``ADMINS`` setting after each email
  verification, if the ``ASTAKOS_MODERATION`` setting is ``True``. The email
  notifies administrators that an additional action is required in order to
  activate the user.
- A welcome email to the user email and an admin notification to ``ADMINS``
  right after each account activation.
- Feedback messages submitted from the astakos contact view and the astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined etc.) to project members or project owners.


Astakos uses the Django internal email delivering mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below. Alter the following example to meet your
smtp server characteristics. Notice that the smtp server is needed for a
proper installation.

.. code-block:: python

   # /etc/synnefo/00-snf-common-admins.conf
   EMAIL_HOST = "mysmtp.server.synnefo.org"
   EMAIL_HOST_USER = "<smtpuser>"
   EMAIL_HOST_PASSWORD = "<smtppassword>"

   # this gets appended in all email subjects
   EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "

   # Address to use for outgoing emails
   DEFAULT_FROM_EMAIL = "server@example.synnefo.org"

   # Email where users can contact for support. This is used in html/email
   # templates.
   CONTACT_EMAIL = "server@example.synnefo.org"

   # The email address that error messages come from
   SERVER_EMAIL = "server-errors@example.synnefo.org"

Notice that since email settings might be required by applications other than
astakos, they are defined in a different configuration file than the one
previously used to set astakos specific settings.

Refer to the
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
for additional information on available email settings.

As mentioned in the previous section, based on the operation that triggers
an email notification, the recipients list differs. Specifically, for
emails whose recipients include contacts from your service team
(administrators, managers, helpdesk etc.) synnefo provides the following
settings located in ``10-snf-common-admins.conf``:

.. code-block:: python

   ADMINS = (('Admin name', 'admin@example.synnefo.org'),
             ('Admin2 name', 'admin2@example.synnefo.org'))
   MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
   HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)

Alternatively, it may be convenient to send e-mails to a file, instead of an
actual smtp server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:

.. code-block:: python

   EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
   EMAIL_FILE_PATH = '/tmp/app-messages'



Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since they result in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

   from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
   monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

   # Monkey-patch psycopg2
   from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
   monkey_patch_psycopg2()

   # If running with greenlets
   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

   DATABASES = {
       'default': {
           # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
           'ENGINE': 'django.db.backends.postgresql_psycopg2',
           'OPTIONS': {'synnefo_poolsize': 8},

           # ATTENTION: This *must* be the absolute path if using sqlite3.
           # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
           'NAME': 'snf_apps',
           'USER': 'synnefo',  # Not used with sqlite3.
           'PASSWORD': 'example_passw0rd',  # Not used with sqlite3.
           # Set to empty string for localhost. Not used with sqlite3.
           'HOST': '4.3.2.1',
           # Set to empty string for default. Not used with sqlite3.
           'PORT': '5432',
       }
   }


Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for astakos:

.. code-block:: console

   # snf-manage migrate im
   # snf-manage migrate quotaholder_app

Then, we load the pre-defined user groups:

.. code-block:: console

   # snf-manage loaddata groups

.. _services-reg:


Services Registration
---------------------

When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (astakos,
cyclades, and pithos) along with the services they provide. Note that you
have to register at least astakos in order to have a usable authentication
system. For each component, you will be asked to provide two URLs: its base
URL and its UI URL.

The former is the location where the component resides; it should equal
the ``<component_name>_BASE_URL`` as specified in the respective component
settings. For example, the base URL for astakos would be
``https://node1.example.com/astakos``.

The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/`` where ``base_url`` is the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --url new_ui_url``.)

The command will also automatically register the resource definitions
offered by the services.

.. code-block:: console

   # snf-component-register

.. note::

   This command is equivalent to running the following series of commands;
   it registers the three components in astakos and then in each host it
   exports the respective service definitions, copies the exported json file
   to the astakos host, where it finally imports it:

   .. code-block:: console

      astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
      astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
      astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
      astakos-host$ snf-manage service-export-astakos > astakos.json
      astakos-host$ snf-manage service-import --json astakos.json
      cyclades-host$ snf-manage service-export-cyclades > cyclades.json
      # copy the file to astakos-host
      astakos-host$ snf-manage service-import --json cyclades.json
      pithos-host$ snf-manage service-export-pithos > pithos.json
      # copy the file to astakos-host
      astakos-host$ snf-manage service-import --json pithos.json

   Notice that in this installation astakos and cyclades are on node1 and
   pithos is on node2.

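
After the registration completes, you can list the registered components (and
the service tokens generated for them, one of which we will need later when
configuring Pithos) at any time with:

.. code-block:: console

   # snf-manage component-list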

Setting Default Base Quota for Resources
----------------------------------------

We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects).

.. code-block:: console

   # snf-manage resource-modify --default-quota-interactive



Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.



Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/astakos``

If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can
see the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data in the sign-up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good,
let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just
created. This user should have an id with a value of ``1`` and the flags
"active" and "verified" set to False. Now run:

.. code-block:: console

   root@node1:~ # snf-manage user-modify 1 --verify --accept

This verifies the user email and activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user
by sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/``
with your browser again. Try to sign in using your new credentials. If the
astakos menu appears and you can see your profile, then you have successfully
set up Astakos.

Let's continue to install Pithos now.


Installation of Pithos on node2
===============================

To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:


Configuration of Pithos
=======================

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set up on
node1. You only need to change settings that have to do with Pithos.
Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.

The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
The Astakos service is used for user management (authentication, quotas, etc.).

The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.

The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.


Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.


Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

   CONFIG = {
       'mode': 'django',
       'environment': {
           'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
       },
       'working_dir': '/etc/synnefo',
       'user': 'www-data',
       'group': 'www-data',
       'args': (
           '--bind=127.0.0.1:8080',
           '--workers=4',
           '--worker-class=gevent',
           '--log-level=debug',
           '--timeout=43200'
       ),
   }


Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

.. code-block:: console

   root@node2:~ # pithos-migrate stamp head


Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos setup. Let's test it now.



Testing of Pithos
=================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/astakos``

Login, and you will see your profile page. Now, click the "pithos" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/pithos/ui/``

and you will see the blue interface of the Pithos application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

* Spawning VMs
* Spawning VMs from Images stored on Pithos
* Uploading your custom Images to Pithos
* Spawning VMs from those custom Images
* Registering existing Pithos files as Images
* Connecting VMs to the Internet
* Creating Private Networks
* Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades Prerequisites
======================

Before proceeding with the Cyclades installation, make sure you have
successfully set up Astakos and Pithos first, because Cyclades depends on
them. If you don't have a working Astakos and Pithos installation yet, please
return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.6/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.


Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. These packages will also
install the proper version of Ganeti. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools

Ganeti will make use of drbd. To enable this and make the configuration
permanent, you have to do the following:

.. code-block:: console

   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
   # echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules



We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). This IP is needed to communicate with
the Ganeti cluster. Make sure node1 and node2 have the same DSA/RSA keys and
``authorized_keys`` for password-less root ssh between each other. If not, then
skip passing ``--no-ssh-init``, but be aware that it will replace the
/root/.ssh/* related files and you might lose access to the master node. Also,
Ganeti will need a volume group to host your VMs' disks, so make sure there is
an LVM volume group named ``ganeti``. Finally, set up a bridge interface on the
host machines (e.g. ``br0``). This will be needed for the network configuration
afterwards.

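
As an illustration, the volume group and the bridge could be prepared like
this (a sketch only; ``/dev/sdb`` is a placeholder for whatever disk you
dedicate to Ganeti, and in a real deployment you would normally define ``br0``
persistently in ``/etc/network/interfaces``, bridged onto the appropriate
physical interface):

.. code-block:: console

   # pvcreate /dev/sdb
   # vgcreate ganeti /dev/sdb

   # apt-get install bridge-utils
   # brctl addbr br0
   # ip link set br0 up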

Then run on node1:

.. code-block:: console

   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                  --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                  --master-netdev eth0 ganeti.node1.example.com
   root@node1:~ # gnt-cluster modify --default-iallocator hail
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

   root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                  --vm-capable=yes node2.example.com
   root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
   root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

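
Before moving on, an optional sanity check of the new cluster (a standard
Ganeti command, not specific to Synnefo) can save debugging time later:

.. code-block:: console

   root@node1:~ # gnt-cluster verify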

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.6/html>`_. Installation
of Ganeti is out of the scope of this guide.


.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~

For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_
OS Definition installed on *all* VM-capable Ganeti nodes. This means we need
snf-image on node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the ``snf-pithos-backend``, to be able to handle image
files stored on Pithos. It also needs ``python-psycopg2`` to be able to access
the Pithos database. This is why we also install them on *all* VM-capable
Ganeti nodes.

.. warning::
   snf-image uses ``curl`` for handling URLs. This means that it will
   not work out of the box if you try to use URLs served by servers which do
   not have a valid certificate. In case you haven't followed the guide's
   directions about the certificates, in order to circumvent this you should
   edit the file ``/etc/default/snf-image``. Change ``#CURL="curl"`` to
   ``CURL="curl -k"`` on every node.


Configuration
~~~~~~~~~~~~~

snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos setup:

.. code-block:: console

   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

   PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos.


Testing
~~~~~~~

You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.


.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images. snf-image supports Images stored in ``extdump``, ``ntfsdump`` or
``diskdump`` format. We recommend the use of the ``diskdump`` format. For more
information about snf-image Image formats see `here
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.

snf-image also supports three (3) different locations for the above Images to
be stored:

* Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
  in :file:`/etc/default/snf-image`)
* On a remote host (accessible via a public URL e.g. http://... or ftp://...)
* On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
image is of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.

.. _ganeti-with-pithos-images:

1377 |
Spawning a VM from a Pithos Image, using Ganeti |
1378 |
----------------------------------------------- |
1379 |
|
1380 |
Now, it is time to test our installation so far. So, we have Astakos and |
1381 |
Pithos installed, we have a working Ganeti installation, the snf-image |
1382 |
definition installed on all VM-capable nodes and a Debian Squeeze Image on |
1383 |
Pithos. Make sure you also have the `metadata file |
1384 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image. |
1385 |
|
1386 |
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line: |
1387 |
|
1388 |
.. code-block:: console |
1389 |
|
1390 |
# gnt-instance add -o snf-image+default --os-parameters \ |
1391 |
img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \ |
1392 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check \ |
1393 |
testvm1 |
1394 |
|
1395 |
In the above command: |
1396 |
|
1397 |
* ``img_passwd``: the arbitrary root password of your new instance |
1398 |
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image |
1399 |
* ``img_id``: If you want to deploy an Image stored on Pithos (our case), this |
1400 |
should have the format ``pithos://<UUID>/<container>/<filename>``: |
1401 |
* ``UUID``: the username found in Cyclades Web UI under API access |
1402 |
* ``container``: ``pithos`` (default, if the Web UI was used) |
1403 |
* ``filename``: the name of file (visible also from the Web UI) |
1404 |
* ``img_properties``: taken from the metadata file. Used only the two mandatory |
1405 |
properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more |
1406 |
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_ |
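
For example, assuming a (hypothetical) user UUID of ``u53r-un1qu3-1d`` and the
default ``pithos`` container, the ``img_id`` for the Image uploaded above
would look like this:

.. code-block:: console

   img_id="pithos://u53r-un1qu3-1d/pithos/debian_base-6.0-7-x86_64.diskdump"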

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works up to this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

   You can bypass the networking sections and go straight to
   :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
   the Cyclades Network Service, but only the Cyclades Compute Service
   (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. The administrator needs to understand how
each level handles Virtual Networks, in order to set up the backend
appropriately before installing Cyclades. Please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11 all network actions are managed with the ``snf-manage
network-*`` commands. This requires the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo-specific
   kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is setup correctly before installing Cyclades,
we will perform some testing actions in this section; the actual setup will be
done afterwards with ``snf-manage`` commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up the dhcp leases; when the VM boots and makes a
dhcp request, iptables will mangle the packet and `nfdhcpd` will create a dhcp
response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your
new VMs. Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure to run after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.
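
As an extra sanity check (optional), you can also verify that DHCP packets are
actually being diverted to queue 42, whether that rule was installed via
``ferm`` or by hand as shown above:

.. code-block:: console

   # iptables -t mangle -vnL PREROUTING | grep NFQUEUE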

Public Network Setup
--------------------

To achieve basic networking, the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain as the router) to which all VMs connect.
Packets will be "forwarded" to the router and then to the Internet. If you want
a more advanced setup (IP-less routing and proxy-ARP), please refer to the
:ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100
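
As a quick check (optional), you can verify on each node that the VLAN
interface came up and was attached to the bridge:

.. code-block:: console

   # brctl show br0
   # ip link show eth0.100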

Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0
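
Optionally, before spawning a test VM, you can confirm that Ganeti knows about
the new network and its connection to the nodegroup:

.. code-block:: console

   # gnt-network list
   # gnt-network info test-net-public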

Now it is time to test that the backend infrastructure is correctly setup for
the Public Network. We will add a new VM, the same way we did in the previous
testing section. However, now we will also add one NIC, configured to be
managed from our previously defined network. Run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
       img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
       -t plain --disk 0:size=2G --no-name-check --no-ip-check \
       --net 0:ip=pool,network=test-net-public \
       testvm2

If the above returns successfully, connect to the new VM through VNC as before
and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the nameserver option in nfdhcpd.conf). This shows correct
configuration of ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

- based on MAC filtering
- based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private networks,
each network has to have a different MAC prefix (for the filtering to take
place) or to be "connected" to a different bridge (i.e. a different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, one for MAC filtered private
networks and several (e.g. 20) for private networks based on physical VLANs,
and assuming ``eth0`` of both hosts are somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
     done

The above will do the following:

* provision 21 new bridges: ``prv0`` - ``prv20``
* provision 21 new vlans: ``eth0.0`` - ``eth0.20``
* add the corresponding vlan to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was setup
correctly.
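
For example, a quick spot check on one of the new bridges (``prv5`` here,
chosen arbitrarily) could look like this:

.. code-block:: console

   # brctl show prv5
   # ip -d link show eth0.5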

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with
extra arguments for the second and third NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
       img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
       -t plain --disk 0:size=2G --no-name-check --no-ip-check \
       --net 0:ip=pool,network=test-net-public \
       --net 1:ip=pool,network=test-net-prv-mac \
       --net 2:ip=none,network=test-net-prv-vlan \
       testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
       img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
       -t plain --disk 0:size=2G --no-name-check --no-ip-check \
       --net 0:ip=pool,network=test-net-public \
       --net 1:ip=pool,network=test-net-prv-mac \
       --net 2:ip=none,network=test-net-prv-vlan \
       testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (a consolidated
example follows this list):

a) The instances have access to the public internet through their first eth
   interface (``eth0``), which has been automatically assigned a public IP.

b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
   the default one (``aa:00:00``).

c) Bring the interfaces up: ``ip link set eth1 up`` and ``ip link set eth2 up``.

d) Run ``dhclient eth1`` and ``dhclient eth2``.

e) On testvm3, ping 192.168.1.2 and 10.0.0.2.
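
Below is a consolidated sketch of steps c) to e), as run inside ``testvm3``,
assuming ``testvm4`` ended up with ``192.168.1.2`` on the MAC filtered network
and ``10.0.0.2`` on the VLAN network (the actual addresses may differ in your
setup):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2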

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

* ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
* ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image``
and set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_.
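
If you prefer to generate the entry in one go, a small sketch like the
following should work; the ``awk`` part keeps only the last field, since some
``openssl`` versions prefix the digest with ``(stdin)=``:

.. code-block:: console

   # HASH=$(echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5 | awk '{print $NF}')
   # echo "cyclades {HA1}$HASH write" >> /var/lib/ganeti/rapi/users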

You have now finished with all the needed Prerequisites for Cyclades. Let's
move on to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we can
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI to prevent sensitive data needed by
`snf-image` (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a
cache backend and exposing it via the VMAPI. The cache entries are invalidated
after the first request. Synnefo uses `memcached <http://memcached.org/>`_ as
a `Django <https://www.djangoproject.com/>`_ cache backend.
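
Since the VMAPI cache backend (configured below) points at the local memcached
instance, a quick check that memcached is actually listening on its default
port (11211) does not hurt:

.. code-block:: console

   # netstat -ntlp | grep 11211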

Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed for a working system. In
general, sane defaults have been chosen for most of the options, to cover most
of the common scenarios. However, if you want to tweak Cyclades feel free to
do so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_AUTH_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_AUTH_URL`` setting
(see also the :ref:`previous <conf-pithos>` section).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with
Astakos. It can be retrieved by running on the Astakos node (node1 in our
case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All of the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored), so we set it
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``20-snf-cyclades-gtools-backend.conf`` file we edited earlier, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.
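
As an optional sanity check, and assuming your Synnefo version ships the
``flavor-list`` management command, you can list the flavors that were just
loaded:

.. code-block:: console

   $ snf-manage flavor-list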

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
setup earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have setup the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been setup correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we setup the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to setup a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge. After having a bridge (e.g. ``br0``) created on every
backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'``:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp=True --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was setup correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

- MAC prefix Pool
- Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo (the commands below run on node1, where Cyclades is
installed):

.. code-block:: console

   node1 # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   node1 # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
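
For instance, a quick look at the last few log lines right after starting the
daemon:

.. code-block:: console

   # tail -n 20 /var/log/synnefo/dispatcher.log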

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix


If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Login and then click 'cyclades'
on the top cloud bar. This should redirect you to:

`https://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all
the available Images from which you can spawn new VMs. The list should be
currently empty, as we haven't registered any Images yet. Close the wizard and
browse the interface (not many things to see yet). If everything seems to
work, let's register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

* Upload an Image file to Pithos
* Register that Image file to Cyclades
* Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to setup kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set cloud.default.url \
       "https://node1.example.com/astakos/identity/v2.0"
   $ kamaki config set cloud.default.token USER_TOKEN

Both the Authentication URL and the USER_TOKEN appear on the user's
`API access` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on their token (in this case, the user is you):

.. code-block:: console

   $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.
2210 |
|
2211 |
Upload an Image file to Pithos |
2212 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
2213 |
|
2214 |
Now, that we have set up `kamaki` we will upload the Image that we have |
2215 |
downloaded and stored under ``/srv/images/``. Although we can upload the Image |
2216 |
under the root ``Pithos`` container (as you may have done when uploading the |
2217 |
Image from the Pithos Web UI), we will create a new container called ``images`` |
2218 |
and store the Image under that container. We do this for two reasons: |
2219 |
|
2220 |
a) To demonstrate how to create containers other than the default ``Pithos``. |
2221 |
This can be done only with the `kamaki` client and not through the Web UI. |
2222 |
|
2223 |
b) As a best organization practise, so that you won't have your Image files |
2224 |
tangled along with all your other Pithos files and directory structures. |
2225 |
|
2226 |
We create the new ``images`` container by running: |
2227 |
|
2228 |
.. code-block:: console |
2229 |
|
2230 |
$ kamaki file create images |
2231 |
|
2232 |
To check if the container has been created, list all containers of your |
2233 |
account: |
2234 |
|
2235 |
.. code-block:: console |
2236 |
|
2237 |
$ kamaki file list |
2238 |
|
2239 |
Then, we upload the Image file to that container: |
2240 |
|
2241 |
.. code-block:: console |
2242 |
|
2243 |
$ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump images |
2244 |
|
2245 |
The first is the local path and the second is the remote container on Pithos. |
2246 |
Check if the file has been uploaded, by listing the container contents: |
2247 |
|
2248 |
.. code-block:: console |
2249 |
|
2250 |
$ kamaki file list images |
2251 |
|
2252 |
Alternatively check if the new container and file appear on the Pithos Web UI. |

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user UUID is
``u53r-un1qu3-1d``.

Once the Image file has been successfully uploaded on Pithos, we register it
to Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
       pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump \
       --public \
       --disk-format=diskdump \
       --property OSFAMILY=linux --property ROOT_PARTITION=1 \
       --property description="Debian Squeeze Base System" \
       --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
       --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the other
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Cyclades to Ganeti and then `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
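
Before moving to the Web UI, you can also confirm the registration from the
command line; the newly registered Image should show up in the listing:

.. code-block:: console

   $ kamaki image list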
2288 |
|
2289 |
Spawn a VM from the Cyclades Web UI |
2290 |
----------------------------------- |
2291 |
|
2292 |
If the registration completes successfully, then go to the Cyclades Web UI from |
2293 |
your browser at: |
2294 |
|
2295 |
`https://node1.example.com/cyclades/ui/` |
2296 |
|
2297 |
Click on the 'New Machine' button and the first step of the wizard will appear. |
2298 |
Click on 'My Images' (right after 'System' Images) on the left pane of the |
2299 |
wizard. Your previously registered Image "Debian Base" should appear under |
2300 |
'Available Images'. If not, something has gone wrong with the registration. Make |
2301 |
sure you can see your Image file on the Pithos Web UI and ``kamaki image |
2302 |
register`` returns successfully with all options and properties as shown above. |
2303 |
|
2304 |
If the Image appears on the list, select it and complete the wizard by selecting |
2305 |
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you |
2306 |
write down your password, because you *WON'T* be able to retrieve it later. |
2307 |
|
2308 |
If everything was setup correctly, after a few minutes your new machine will go |
2309 |
to state 'Running' and you will be able to use it. Click 'Console' to connect |
2310 |
through VNC out of band, or click on the machine's icon to connect directly via |
2311 |
SSH or RDP (for windows machines). |
2312 |
|
2313 |
Congratulations. You have successfully installed the whole Synnefo stack and |
2314 |
connected all components. Go ahead in the next section to test the Network |
2315 |
functionality from inside Cyclades and discover even more features. |
2316 |
|
2317 |
General Testing |
2318 |
=============== |
2319 |
|
2320 |
Notes |
2321 |
===== |
2322 |
|