.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

* Identity Management (Astakos)
* Object Storage Service (Pithos+)
* Compute Service (Cyclades)
* Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be
installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can fix this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2,
and they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
| ``deb http://apt.dev.grnet.gr squeeze-backports main``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible by both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
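
For reference, a minimal NFS setup matching the above layout could look like
the following sketch. The export options and the use of ``node2.example.com``
are assumptions; adapt them to your own network and security policy:

.. code-block:: console

    root@node1:~ # apt-get install nfs-kernel-server
    root@node1:~ # echo "/srv/pithos node2.example.com(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
    root@node1:~ # exportfs -ra

    root@node2:~ # apt-get install nfs-common
    root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos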
83 |
|
84 |
Before starting the synnefo installation, you will need basic third party |
85 |
software to be installed and configured on the physical nodes. We will describe |
86 |
each node's general prerequisites separately. Any additional configuration, |
87 |
specific to a synnefo service for each node, will be described at the service's |
88 |
section. |
89 |
|
90 |
Finally, it is required for Cyclades and Ganeti nodes to have synchronized |
91 |
system clocks (e.g. by running ntpd). |
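
If you are running ``ntpd``, one quick way to confirm that each node is
actually syncing against its time sources is to query the peer list:

.. code-block:: console

    # ntpq -p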

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)
* ntp (NTP daemon)
* gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

    # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

    # apt-get install python-psycopg2

To install RabbitMQ >= 2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

    # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
    # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

    # apt-get update
    # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, that will host all django
apps related tables. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

    # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--worker-class=gevent',
            '--workers=8',
            '--log-level=debug',
        ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

        # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        # SetEnv no-gzip
        # SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

    # a2enmod ssl
    # a2enmod rewrite
    # a2dissite default
    # a2ensite synnefo
    # a2ensite synnefo-ssl
    # a2enmod headers
    # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

        # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

    # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
    # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
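
If you would like to verify the new user and its permissions, the standard
``rabbitmqctl`` listing commands can be used:

.. code-block:: console

    # rabbitmqctl list_users
    # rabbitmqctl list_permissions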

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

    # cd /srv/pithos
    # mkdir data
    # chown www-data:www-data data
    # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* ntp (NTP daemon)
* gevent

You can install the above by running:

.. code-block:: console

    # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

    # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

    # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.
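
If you want to confirm right away that node2 can reach the databases on node1,
a quick connection test with the standard ``psql`` client is enough (it will
prompt for the ``example_passw0rd`` password we set earlier):

.. code-block:: console

    root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps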

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following
(same contents as in node1; you can just copy/paste the file):

.. code-block:: console

    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--worker-class=gevent',
            '--workers=4',
            '--log-level=debug',
            '--timeout=43200'
        ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

        # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

    # a2enmod ssl
    # a2enmod rewrite
    # a2dissite default
    # a2ensite synnefo
    # a2ensite synnefo-ssl
    # a2enmod headers
    # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

        # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

    # apt-get install snf-astakos-app snf-quotaholder-app snf-pithos-backend

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't install automatically, you should
install it manually by running:

.. code-block:: console

    # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install Synnefo in a custom made
`Django <https://www.djangoproject.com/>`_ project. This corner case
concerns only very advanced users that know what they are doing and want to
experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
        'default': {
            # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
            'ENGINE': 'postgresql_psycopg2',
            # ATTENTION: This *must* be the absolute path if using sqlite3.
            # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
            'NAME': 'snf_apps',
            'USER': 'synnefo',                      # Not used with sqlite3.
            'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
            # Set to empty string for localhost. Not used with sqlite3.
            'HOST': '4.3.2.1',
            # Set to empty string for default. Not used with sqlite3.
            'PORT': '5432',
        }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base URL of our domain (for all
services). ``ASTAKOS_BASEURL`` is the astakos home page.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf``:

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding the following to ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Since version 0.13 you need to configure some basic settings for the new *Quota*
feature.

Specifically:

Edit ``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
    QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following
settings, since they result in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
        'default': {
            # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
            'ENGINE': 'postgresql_psycopg2',
            'OPTIONS': {'synnefo_poolsize': 8},

            # ATTENTION: This *must* be the absolute path if using sqlite3.
            # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
            'NAME': 'snf_apps',
            'USER': 'synnefo',                      # Not used with sqlite3.
            'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
            # Set to empty string for localhost. Not used with sqlite3.
            'HOST': '4.3.2.1',
            # Set to empty string for default. Not used with sqlite3.
            'PORT': '5432',
        }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we select
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

    # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

    # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
    # snf-manage service-add "cyclades" https://node1.example.com/ui/
    # snf-manage service-add "pithos+" https://node2.example.com/ui/
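
You can verify the registrations, and note the ``id`` and token assigned to
each service (both are needed later in this guide), by listing them:

.. code-block:: console

    # snf-manage service-list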

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good,
let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) at the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

    # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to
the "Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

    # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:

.. code-block:: console

    PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

    PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

    PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
    PITHOS_AUTHENTICATION_USERS = None

    PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
    PITHOS_USER_CATALOG_URL = 'https://node1.example.com/user_catalogs'
    PITHOS_USER_FEEDBACK_URL = 'https://node1.example.com/feedback'
    PITHOS_USER_LOGIN_URL = 'https://node1.example.com/login'

    PITHOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
    PITHOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    PITHOS_USE_QUOTAHOLDER = True

    # Set to False if astakos & pithos are on the same host
    #PITHOS_PROXY_USER_SERVICES = True


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory at node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI at which
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

    # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos+ service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
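
For example, if OpenStack compatibility matters in your deployment, you would
set it explicitly in the pithos+ settings file (we leave the default as is in
this guide):

.. code-block:: console

    PITHOS_UPDATE_MD5 = True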

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

    # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
        'mode': 'django',
        'environment': {
            'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
        },
        'working_dir': '/etc/synnefo',
        'user': 'www-data',
        'group': 'www-data',
        'args': (
            '--bind=127.0.0.1:8080',
            '--workers=4',
            '--worker-class=gevent',
            '--log-level=debug',
            '--timeout=43200'
        ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it with the most recent
revision, so that in the future we can determine which migrations should run
during subsequent upgrades.

In order to find the most recent revision, we check the migration history:

.. code-block:: console

    root@node2:~ # pithos-migrate history
    2a309a9a3438 -> 27381099d477 (head), alter public add column url
    165ba3fbfe53 -> 2a309a9a3438, fix statistics negative population
    3dd56e750a3 -> 165ba3fbfe53, update account in paths
    230f8ce9c90f -> 3dd56e750a3, Fix latest_version
    8320b1c62d9 -> 230f8ce9c90f, alter nodes add column latest version
    None -> 8320b1c62d9, create index nodes.parent

Finally, we stamp it with the one found in the previous step:

.. code-block:: console

    root@node2:~ # pithos-migrate stamp 27381099d477
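
Since ``pithos-migrate`` wraps alembic, you should be able to confirm the
stamped revision with alembic's ``current`` subcommand (this assumes the
wrapper forwards it unchanged):

.. code-block:: console

    root@node2:~ # pithos-migrate current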

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Log in, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui/``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

* Spawning VMs
* Spawning VMs from Images stored on Pithos+
* Uploading your custom Images to Pithos+
* Spawning VMs from those custom Images
* Registering existing Pithos+ files as Images
* Connecting VMs to the Internet
* Creating Private Networks
* Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low-level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. To do so:

.. code-block:: console

    # apt-get install snf-ganeti ganeti-htools
    # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` for password-less root ssh between each
other. If not, skip passing ``--no-ssh-init`` below, but be aware that it will
replace ``/root/.ssh/*`` related files and you might lose access to the master
node. Also, make sure there is an LVM volume group named ``ganeti`` that will
host your VMs' disks. Finally, set up a bridge interface on the host machines
(e.g. ``br0``). Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

    # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos+. It also needs `python-psycopg2` to be
able to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. To circumvent this you should edit the file
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:

.. code-block:: console

    # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image to run successfully, and it may take a few minutes depending on your
Internet connection.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos+. This means that
it can talk directly to the Pithos+ backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible by all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

    # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

* Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
  in :file:`/etc/default/snf-image`)
* On a remote host (accessible via a public URL e.g. http://... or ftp://...)
* On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos+ Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

    # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        testvm1

In the above command:

* ``img_passwd``: the arbitrary root password of your new instance
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
* ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
  should have the format ``pithos://<UUID>/<container>/<filename>``:

  * ``username``: ``user@example.com`` (defined during Astakos sign up)
  * ``container``: ``pithos`` (default, if the Web UI was used)
  * ``filename``: the name of the file (visible also from the Web UI)

* ``img_properties``: taken from the metadata file. We use only the two
  mandatory properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
  <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

    # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos+. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To do so, please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11 all network actions are managed with the ``snf-manage
network-*`` commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing Cyclades,
we will run some tests in this section; the actual setup will be done afterwards
with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti it issues various commands depending on the network type the NIC is
connected to and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

    # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

    MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.

.. code-block:: console

    # apt-get install nfqueue-bindings-python=0.3+physindev-1
    # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your
new VMs.
1380 |
VMs. Once you are finished, restart the server on all nodes: |
1381 |
|
1382 |
.. code-block:: console |
1383 |
|
1384 |
# /etc/init.d/nfdhcpd restart |
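
For reference, after these edits the relevant part of
``/etc/nfdhcpd/nfdhcpd.conf`` might look roughly like the sketch below. The
section name and the nameserver address (``4.3.2.10``) are assumptions made for
this guide; keep the rest of the shipped file as-is and only adjust these
values for your site:

.. code-block:: console

   [dhcp]
   # NFQUEUE number that the iptables rule below sends DHCP requests to
   dhcp_queue = 42
   # DNS IP(s) that will be handed out to your VMs (example address)
   nameservers = 4.3.2.10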

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure to run after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

To achieve basic networking the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs will
connect. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (ip-less routing and proxy-arp), please refer
to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100
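
Before moving on, you can optionally verify that the bridge and the VLAN
interface came up as expected (the exact output will of course differ between
machines):

.. code-block:: console

   # brctl show br0
   # ip link show eth0.100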


Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0
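
Optionally, confirm that the network is now defined and connected to the
nodegroup before spawning a test instance:

.. code-block:: console

   # gnt-network list
   # gnt-network info test-net-public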

Now it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, the same way we did in the previous
testing section. However, now we will also add one NIC, configured to be
managed by our previously defined network. Run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        --net 0:ip=pool,network=test-net-public \
        testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS configuration (the ``nameservers`` option set in nfdhcpd.conf). This
verifies the correct configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.
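
For example (``5.6.7.1`` is the gateway we defined above; the external hostname
is just an example and must be reachable and answering ICMP from your site):

.. code-block:: console

   root@testvm2:~ # ping -c 3 5.6.7.1
   root@testvm2:~ # ping -c 3 www.debian.org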

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

- based on MAC filtering
- based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private networks,
each network has to have a different MAC prefix (for the filtering to take
place) or be "connected" to a different bridge (i.e. a different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges (one for MAC-filtered private
networks and several, e.g. 20, for private networks based on physical VLANs),
and assuming ``eth0`` of both hosts is somehow connected (via cable/switch with
the VLANs configured correctly), run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
     done

The above will do the following:

* provision 21 new bridges: ``prv0`` - ``prv20``
* provision 21 new vlans: ``eth0.0`` - ``eth0.20``
* add the corresponding vlan to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.
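
A (truncated) ``brctl show`` output on each node might look something like the
following, with each ``prvN`` bridge holding the matching ``eth0.N`` VLAN
interface (the bridge ids below are placeholders):

.. code-block:: console

   # brctl show
   bridge name     bridge id               STP enabled     interfaces
   prv0            8000.xxxxxxxxxxxx       no              eth0.0
   prv1            8000.xxxxxxxxxxxx       no              eth0.1
   ...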

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them on the
same Private Networks (one MAC-filtered and one physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with two
extra ``--net`` arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        --net 0:ip=pool,network=test-net-public \
        --net 1:ip=pool,network=test-net-prv-mac \
        --net 2:ip=none,network=test-net-prv-vlan \
        testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
        img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
        -t plain --disk 0:size=2G --no-name-check --no-ip-check \
        --net 0:ip=pool,network=test-net-public \
        --net 1:ip=pool,network=test-net-prv-mac \
        --net 2:ip=none,network=test-net-prv-vlan \
        testvm4

Above, we create two instances with their first NIC connected to the Internet,
their second NIC connected to a MAC-filtered private network and their third
NIC connected to the first physical-VLAN private network. Now, connect to the
instances using VNC and make sure everything works as expected (a short console
sketch of steps c) to e) follows the list):

a) The instances have access to the public internet through their first eth
   interface (``eth0``), which has been automatically assigned a public IP.

b) ``eth1`` will have MAC prefix ``aa:00:55``, while ``eth2`` will have the
   default one (``aa:00:00``).

c) ip link set ``eth1``/``eth2`` up

d) dhclient ``eth1``/``eth2``

e) On testvm3 ping 192.168.1.2/10.0.0.2
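
A rough console sketch of steps c) to e), run inside the VMs (the
``192.168.1.2`` and ``10.0.0.2`` addresses are examples and depend on what each
instance actually got or was assigned on its private NICs):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2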

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

* ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
* ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
* ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found
`here <http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.
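
As an optional sanity check, you can verify that the new RAPI user is accepted
by querying the cluster info over RAPI. This sketch assumes RAPI listens on its
default port (5080) and that ``ganeti.node1.example.com`` resolves to the
master; depending on your Ganeti version you may first need to restart the
``ganeti-rapi`` daemon so that it picks up the new user:

.. code-block:: console

   # curl -k --user cyclades:example_rapi_passw0rd https://ganeti.node1.example.com:5080/2/info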

You have now finished with all the needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.

Since version 0.13, Synnefo uses the VMAPI to prevent sensitive data needed by
`snf-image` (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a cache
backend and exposing it via the VMAPI. The cache entries are invalidated after
the first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/``, prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed to end up with a working system.
In general, sane defaults have been chosen for most of the options, to cover
most of the common scenarios. However, if you want to tweak Cyclades feel free
to do so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

.. warning::

   All services must use the same quotaholder token and URL that were
   configured for the quotaholder service.

TODO: Document the Network Options here


Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored), so we set it
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]


The above settings denote the Message Queue. Those settings should have the
same values as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file,
and reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect users
when they log out. We point that to Astakos, too.

Edit ``/etc/synnefo/20-snf-cyclades-app-quotas.conf``:

.. code-block:: console

   CYCLADES_USE_QUOTAHOLDER = True
   CYCLADES_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   CYCLADES_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"
   VMAPI_BASE_URL = "https://node1.example.com"
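
Since the VMAPI cache lives in memcached, it is worth double-checking that
memcached is actually listening on the address configured above. A simple,
non-authoritative check:

.. code-block:: console

   # netstat -ntlp | grep 11211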

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can verify that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
   This means that you cannot add VMs to it. The reason for this is that the
   nodes should be unavailable to Synnefo until the Administrator explicitly
   releases them. To change this setting, use ``snf-manage backend-modify
   --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
        --user=cyclades \
        --pass=example_rapi_passw0rd \
        1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we set up the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want to
learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to attach every
created NIC to a bridge. After having a bridge (e.g. ``br0``) created on every
backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'``
and create the network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
        --gateway=5.6.7.1 \
        --subnet6=2001:648:2FFC:1322::/64 \
        --gateway6=2001:648:2FFC:1322::1 \
        --public --dhcp --flavor=CUSTOM \
        --link=br0 --mode=bridged \
        --name=public_network \
        --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

- MAC prefix Pool
- Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo:

.. code-block:: console

   root@testvm1:~ # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   root@testvm1:~ # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
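
For example, keep the log open in a terminal while the rest of the setup
proceeds:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log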

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quotas
------------

.. code-block:: console

   node1 # snf-manage astakos-init --load-service-resources
   node1 # snf-manage astakos-quota --verify
   node1 # snf-manage astakos-quota --sync
   node2 # snf-manage pithos-reset-usage
   node1 # snf-manage cyclades-reset-usage

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup.

Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

`https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should currently be
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored
on Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
steps, even though you may already have uploaded an Image on Pithos+ from a
:ref:`previous <snf-image-images>` section:

* Upload an Image file to Pithos+
* Register that Image file to Plankton
* Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki


Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set store.enable on
   $ kamaki config set store.pithos_extensions on
   $ kamaki config set store.account USER_UUID
   $ kamaki config set global.token USER_TOKEN

The USER_TOKEN and USER_UUID appear on the user's (``user@example.com``)
`Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we could upload the
Image under the root ``Pithos`` container (as you may have done when uploading
the Image from the Pithos+ Web UI), we will create a new container called
``images`` and store the Image under that container. We do this for two
reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
        /srv/images/debian_base-6.0-7-x86_64.diskdump \
        debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote path on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.
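
You can also verify the upload from the command line; assuming your `kamaki`
version supports listing a container's contents this way, something along
these lines should show the uploaded file:

.. code-block:: console

   $ kamaki store list images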

Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
        pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump \
        --public \
        --disk-format=diskdump \
        --property OSFAMILY=linux --property ROOT_PARTITION=1 \
        --property description="Debian Squeeze Base System" \
        --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
        --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump`` as an Image in
Plankton. This Image will be public (``--public``), so all users will be able
to spawn VMs from it, and is of type ``diskdump``. The first two properties
(``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the other properties
are optional, but recommended, so that the Images appear nicely on the Cyclades
Web UI. ``Debian Base`` will appear as the name of this Image. The ``OS``
property's valid values may be found in the ``IMAGE_ICONS`` variable inside the
``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
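
To double-check the registration from the command line, you can list the
Images that Plankton now knows about:

.. code-block:: console

   $ kamaki image list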

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

`https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returns successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by
selecting a flavor and a name for your VM. Then finish by clicking 'Create'.
Make sure you write down your password, because you *WON'T* be able to retrieve
it later.

If everything was set up correctly, after a few minutes your new machine will
go to state 'Running' and you will be able to use it. Click 'Console' to
connect through VNC out of band, or click on the machine's icon to connect
directly via SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====