Revision f8cdf6ec
b/docs/quick-install-admin-guide.rst
5 | 5 |
|
6 | 6 |
This is the Administrator's installation guide. |
7 | 7 |
|
8 |
It describes how to install the whole synnefo stack on two (2) physical nodes,
|
|
8 |
It describes how to install the whole Synnefo stack on two (2) physical nodes,
|
|
9 | 9 |
with minimum configuration. It installs synnefo from Debian packages, and |
10 |
assumes the nodes run Debian Squeeze. After successful installation, you will
|
|
10 |
assumes the nodes run Debian Wheezy. After successful installation, you will
|
|
11 | 11 |
have the following services running: |
12 | 12 |
|
13 | 13 |
* Identity Management (Astakos) |
... | ... | |
18 | 18 |
|
19 | 19 |
and a single unified Web UI to manage them all. |
20 | 20 |
|
21 |
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are |
|
22 |
not released yet. |
|
23 |
|
|
24 | 21 |
If you just want to install the Object Storage Service (Pithos), follow the |
25 | 22 |
guide and just stop after the "Testing of Pithos" section. |
26 | 23 |
|
... | ... | |
37 | 34 |
|
38 | 35 |
For the rest of the documentation we will refer to the first physical node as |
39 | 36 |
"node1" and the second as "node2". We will also assume that their domain names |
40 |
are "node1.example.com" and "node2.example.com" and their public IPs are "4.3.2.1" and
|
|
41 |
"4.3.2.2" respectively. It is important that the two machines are under the same domain name.
|
|
37 |
are "node1.example.com" and "node2.example.com" and their public IPs are "203.0.113.1" and
|
|
38 |
"203.0.113.2" respectively. It is important that the two machines are under the same domain name.
|
|
42 | 39 |
In case you choose to follow a private installation you will need to |
43 |
set up a private dns server, using dnsmasq for example. See node1 below for more. |
|
40 |
set up a private dns server, using dnsmasq for example. See node1 below for |
|
41 |
more information on how to do so. |
|
44 | 42 |
|
45 | 43 |
General Prerequisites |
46 | 44 |
===================== |
... | ... | |
51 | 49 |
To be able to download all synnefo components you need to add the following |
52 | 50 |
lines in your ``/etc/apt/sources.list`` file: |
53 | 51 |
|
54 |
| ``deb http://apt.dev.grnet.gr squeeze/``
|
|
55 |
| ``deb-src http://apt.dev.grnet.gr squeeze/``
|
|
52 |
| ``deb http://apt.dev.grnet.gr wheezy/``
|
|
53 |
| ``deb-src http://apt.dev.grnet.gr wheezy/``
|
|
56 | 54 |
|
57 | 55 |
and import the repo's GPG key: |
58 | 56 |
|
59 | 57 |
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -`` |
60 | 58 |
|
61 |
Also add the following line to enable the ``squeeze-backports`` repository, |
|
62 |
which may provide more recent versions of certain packages. The repository |
|
63 |
is deactivated by default and must be specified expicitly in ``apt-get`` |
|
64 |
operations: |
|
59 |
Update your list of packages and continue with the installation: |
|
60 |
|
|
61 |
.. code-block:: console |
|
65 | 62 |
|
66 |
| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
|
|
63 |
# apt-get update
|
|
67 | 64 |
|
68 | 65 |
You also need a shared directory visible by both nodes. Pithos will save all |
69 |
data inside this directory. By 'all data', we mean files, images, and pithos
|
|
66 |
data inside this directory. By 'all data', we mean files, images, and Pithos
|
|
70 | 67 |
specific mapping data. If you plan to upload more than one basic image, this |
71 | 68 |
directory should have at least 50GB of free space. During this guide, we will |
72 | 69 |
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos`` |
... | ... | |
96 | 93 |
* rabbitmq (message queue) |
97 | 94 |
* ntp (NTP daemon) |
98 | 95 |
* gevent |
99 |
* dns server
|
|
96 |
* dnsmasq (DNS server)
|
|
100 | 97 |
|
101 | 98 |
You can install apache2, postgresql, ntp and rabbitmq by running: |
102 | 99 |
|
... | ... | |
104 | 101 |
|
105 | 102 |
# apt-get install apache2 postgresql ntp rabbitmq-server |
106 | 103 |
|
107 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
|
108 |
the official debian backports: |
|
104 |
To install gunicorn and gevent, run: |
|
109 | 105 |
|
110 | 106 |
.. code-block:: console |
111 | 107 |
|
112 |
# apt-get -t squeeze-backports install gunicorn |
|
113 |
|
|
114 |
Also, make sure to install gevent >= 0.13.6. Again from the debian backports: |
|
115 |
|
|
116 |
.. code-block:: console |
|
117 |
|
|
118 |
# apt-get -t squeeze-backports install python-gevent |
|
108 |
# apt-get install gunicorn python-gevent |
|
119 | 109 |
|
120 | 110 |
On node1, we will create our databases, so you will also need the |
121 | 111 |
python-psycopg2 package: |
... | ... | |
124 | 114 |
|
125 | 115 |
# apt-get install python-psycopg2 |
126 | 116 |
|
127 |
|
|
128 | 117 |
Database setup |
129 | 118 |
~~~~~~~~~~~~~~ |
130 | 119 |
|
... | ... | |
151 | 140 |
postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo; |
152 | 141 |
|
153 | 142 |
Configure the database to listen to all network interfaces. You can do this by |
154 |
editting the file ``/etc/postgresql/8.4/main/postgresql.conf`` and change
|
|
143 |
editing the file ``/etc/postgresql/9.1/main/postgresql.conf`` and changing
|
|
155 | 144 |
``listen_addresses`` to ``'*'`` : |
156 | 145 |
|
157 | 146 |
.. code-block:: console |
158 | 147 |
|
159 | 148 |
listen_addresses = '*' |
160 | 149 |
|
161 |
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
|
|
150 |
Furthermore, edit ``/etc/postgresql/9.1/main/pg_hba.conf`` to allow node1 and
|
|
162 | 151 |
node2 to connect to the database. Add the following lines under ``#IPv4 local |
163 | 152 |
connections:`` : |
164 | 153 |
|
165 | 154 |
.. code-block:: console |
166 | 155 |
|
167 |
host all all 4.3.2.1/32 md5
|
|
168 |
host all all 4.3.2.2/32 md5
|
|
156 |
host all all 203.0.113.1/32 md5
|
|
157 |
host all all 203.0.113.2/32 md5
|
|
169 | 158 |
|
170 |
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
|
|
159 |
Make sure to substitute "203.0.113.1" and "203.0.113.2" with node1's and node2's
|
|
171 | 160 |
actual IPs. Now, restart the server to apply the changes: |
172 | 161 |
|
173 | 162 |
.. code-block:: console |
174 | 163 |
|
175 | 164 |
# /etc/init.d/postgresql restart |
176 | 165 |
|
177 |
Gunicorn setup |
|
178 |
~~~~~~~~~~~~~~ |
|
179 |
|
|
180 |
Rename the file ``/etc/gunicorn.d/synnefo.example`` to |
|
181 |
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file: |
|
182 |
|
|
183 |
.. code-block:: console |
|
184 |
|
|
185 |
# mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo |
|
186 |
|
|
187 |
|
|
188 |
.. warning:: Do NOT start the server yet, because it won't find the |
|
189 |
``synnefo.settings`` module. Also, in case you are using ``/etc/hosts`` |
|
190 |
instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to |
|
191 |
``--worker-class=sync``. We will start the server after successful |
|
192 |
installation of astakos. If the server is running:: |
|
193 |
|
|
194 |
# /etc/init.d/gunicorn stop |
|
195 | 166 |
|
196 | 167 |
Certificate Creation |
197 | 168 |
~~~~~~~~~~~~~~~~~~~~~ |
198 | 169 |
|
199 |
Node1 will host Cyclades. Cyclades should communicate with the other snf tools over a trusted connection. |
|
200 |
In order for the connection to be trusted, the keys provided to apache below should be signed with a certificate. |
|
170 |
Node1 will host Cyclades. Cyclades should communicate with the other Synnefo |
|
171 |
Services and users over a secure channel. In order for the connection to be |
|
172 |
trusted, the keys provided to Apache below should be signed with a certificate. |
|
201 | 173 |
This certificate should be added to all nodes. In case you don't have signed keys, you can create a self-signed certificate |
202 |
and sign your keys with this. To do so on node1 run |
|
174 |
and sign your keys with this. To do so on node1 run:
|
|
203 | 175 |
|
204 | 176 |
.. code-block:: console |
205 | 177 |
|
206 |
# aptitude install openvpn
|
|
178 |
# apt-get install openvpn
|
|
207 | 179 |
# mkdir /etc/openvpn/easy-rsa |
208 | 180 |
# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa |
209 | 181 |
# cd /etc/openvpn/easy-rsa/2.0 |
... | ... | |
222 | 194 |
|
223 | 195 |
# ./build-ca |
224 | 196 |
|
225 |
The previous will create a ``ca.crt`` file. Copy this file under
|
|
226 |
``/usr/local/share/ca-certificates/`` directory and run : |
|
197 |
The previous will create a ``ca.crt`` file in the directory ``/etc/openvpn/easy-rsa/2.0/keys``.
|
|
198 |
Copy this file under the ``/usr/local/share/ca-certificates/`` directory and run:
|
|
227 | 199 |
|
228 | 200 |
.. code-block:: console |
229 | 201 |
|
... | ... | |
237 | 209 |
|
238 | 210 |
# ./build-key-server node1.example.com |
239 | 211 |
|
240 |
This will create a .pem and a .key file in your current folder. Copy these in |
|
241 |
``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively and |
|
242 |
use them in the apache2 configuration file below instead of the defaults. |
|
212 |
This will create ``01.pem`` and ``node1.example.com.key`` files in the |
|
213 |
``/etc/openvpn/easy-rsa/2.0/keys`` directory. Copy these in ``/etc/ssl/certs/`` |
|
214 |
and ``/etc/ssl/private/`` respectively and use them in the apache2 |
|
215 |
configuration file below instead of the defaults. |
|
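One way to perform this copy is sketched below; the source file names are the ones generated by the commands above, while the destination names are simply our suggestion (anything you later reference from the Apache configuration will do):

.. code-block:: console

   # cp /etc/openvpn/easy-rsa/2.0/keys/01.pem /etc/ssl/certs/synnefo.pem
   # cp /etc/openvpn/easy-rsa/2.0/keys/node1.example.com.key /etc/ssl/private/synnefo.key
   # chmod 600 /etc/ssl/private/synnefo.key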
243 | 216 |
|
244 | 217 |
Apache2 setup |
245 | 218 |
~~~~~~~~~~~~~ |
... | ... | |
343 | 316 |
Pithos data directory setup |
344 | 317 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
345 | 318 |
|
346 |
As mentioned in the General Prerequisites section, there is a directory called
|
|
347 |
``/srv/pithos`` visible by both nodes. We create and setup the ``data`` |
|
319 |
As mentioned in the General Prerequisites section, there should be a directory
|
|
320 |
called ``/srv/pithos`` visible by both nodes. We create and setup the ``data``
|
|
348 | 321 |
directory inside it: |
349 | 322 |
|
350 | 323 |
.. code-block:: console |
351 | 324 |
|
325 |
# mkdir /srv/pithos |
|
352 | 326 |
# cd /srv/pithos |
353 | 327 |
# mkdir data |
354 | 328 |
# chown www-data:www-data data |
355 | 329 |
# chmod g+ws data |
356 | 330 |
|
331 |
This directory must be shared via `NFS <https://en.wikipedia.org/wiki/Network_File_System>`_. |
|
332 |
In order to do this, run: |
|
333 |
|
|
334 |
.. code-block:: console |
|
335 |
|
|
336 |
# apt-get install rpcbind nfs-kernel-server |
|
337 |
|
|
338 |
Now edit ``/etc/exports`` and add the following line: |
|
339 |
|
|
340 |
.. code-block:: console |
|
341 |
|
|
342 |
/srv/pithos/ 203.0.113.2(rw,no_root_squash,sync,subtree_check) |
|
343 |
|
|
344 |
Once done, run: |
|
345 |
|
|
346 |
.. code-block:: console |
|
347 |
|
|
348 |
# /etc/init.d/nfs-kernel-server restart |
|
349 |
|
|
350 |
|
|
357 | 351 |
DNS server setup |
358 | 352 |
~~~~~~~~~~~~~~~~ |
359 | 353 |
|
360 |
If your machines are not under the same domain nameyou have to set up a dns server. |
|
361 |
In order to set up a dns server using dnsmasq do the following |
|
354 |
If your machines are not under the same domain name, you have to set up a dns server.
|
|
355 |
In order to set up a dns server using dnsmasq, do the following:
|
|
362 | 356 |
|
363 | 357 |
.. code-block:: console |
364 | 358 |
|
365 |
# apt-get install dnsmasq
|
|
359 |
# apt-get install dnsmasq
|
|
366 | 360 |
|
367 |
Then edit you ``/etc/hosts/`` as follows
|
|
361 |
Then edit your ``/etc/hosts`` file as follows:
|
|
368 | 362 |
|
369 | 363 |
.. code-block:: console |
370 | 364 |
|
371 |
4.3.2.1 node1.example.com
|
|
372 |
4.3.2.2 node2.example.com
|
|
365 |
203.0.113.1 node1.example.com
|
|
366 |
203.0.113.2 node2.example.com
|
|
373 | 367 |
|
374 |
Finally edit the ``/etc/dnsmasq.conf`` file and specify the ``listen-address`` and |
|
375 |
the ``interface`` you would like to listen to. |
|
368 |
dnsmasq will serve any IPs/domains found in ``/etc/resolv.conf``. |
|
376 | 369 |
|
377 |
Also add the following in your ``/etc/resolv.conf`` file |
|
370 |
There is a `"bug" in libevent 2.0.5 <http://sourceforge.net/p/levent/bugs/193/>`_ |
|
371 |
, where if you have multiple nameservers in your ``/etc/resolv.conf``, libevent |
|
372 |
will round-robin against them. To avoid this, you must use a single nameserver |
|
373 |
for all your needs. Edit your ``/etc/resolv.conf`` to include your dns server: |
|
378 | 374 |
|
379 | 375 |
.. code-block:: console |
380 | 376 |
|
381 |
nameserver 4.3.2.1 |
|
377 |
nameserver 203.0.113.1 |
|
378 |
|
|
379 |
Because of the aforementioned bug, you can't specify more than one DNS server |
|
380 |
in your ``/etc/resolv.conf``. In order for dnsmasq to serve domains not in |
|
381 |
``/etc/hosts``, edit ``/etc/dnsmasq.conf`` and change the line starting with |
|
382 |
``#resolv-file=`` to: |
|
383 |
|
|
384 |
.. code-block:: console |
|
385 |
|
|
386 |
resolv-file=/etc/external-dns |
|
387 |
|
|
388 |
Now create the file ``/etc/external-dns`` and specify any extra DNS servers you |
|
389 |
want dnsmasq to query for domains, e.g., 8.8.8.8: |
|
390 |
|
|
391 |
.. code-block:: console |
|
392 |
|
|
393 |
nameserver 8.8.8.8 |
|
394 |
|
|
395 |
In the ``/etc/dnsmasq.conf`` file, you can also specify the ``listen-address`` |
|
396 |
and the ``interface`` you would like dnsmasq to listen to. |
|
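For example, with the example IP used throughout this guide, the relevant ``/etc/dnsmasq.conf`` lines could look like the following (the interface name is an assumption; use the one dnsmasq should actually listen on):

.. code-block:: console

   listen-address=203.0.113.1
   interface=eth0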
397 |
|
|
398 |
Finally, restart dnsmasq: |
|
399 |
|
|
400 |
.. code-block:: console |
|
401 |
|
|
402 |
# /etc/init.d/dnsmasq restart |
|
382 | 403 |
|
383 | 404 |
You are now ready with all general prerequisites concerning node1. Let's go to |
384 | 405 |
node2. |
... | ... | |
395 | 416 |
* ntp (NTP daemon) |
396 | 417 |
* gevent |
397 | 418 |
* certificates |
398 |
* dns setup
|
|
419 |
* dnsmasq (DNS server)
|
|
399 | 420 |
|
400 | 421 |
You can install the above by running: |
401 | 422 |
|
... | ... | |
403 | 424 |
|
404 | 425 |
# apt-get install apache2 postgresql ntp |
405 | 426 |
|
406 |
Make sure to install gunicorn >= v0.12.2. You can do this by installing from |
|
407 |
the official debian backports: |
|
427 |
To install gunicorn and gevent, run: |
|
408 | 428 |
|
409 | 429 |
.. code-block:: console |
410 | 430 |
|
411 |
# apt-get -t squeeze-backports install gunicorn |
|
412 |
|
|
413 |
Also, make sure to install gevent >= 0.13.6. Again from the debian backports: |
|
414 |
|
|
415 |
.. code-block:: console |
|
416 |
|
|
417 |
# apt-get -t squeeze-backports install python-gevent |
|
431 |
# apt-get install gunicorn python-gevent |
|
418 | 432 |
|
419 | 433 |
Node2 will connect to the databases on node1, so you will also need the |
420 | 434 |
python-psycopg2 package: |
... | ... | |
432 | 446 |
for performance/scalability/redundancy reasons, but those kinds of setups are out |
433 | 447 |
of the scope of this guide. |
434 | 448 |
|
435 |
Gunicorn setup |
|
436 |
~~~~~~~~~~~~~~ |
|
437 |
|
|
438 |
Rename the file ``/etc/gunicorn.d/synnefo.example`` to |
|
439 |
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file |
|
440 |
(as happened for node1): |
|
441 |
|
|
442 |
.. code-block:: console |
|
443 |
|
|
444 |
# mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo |
|
445 |
|
|
446 |
|
|
447 |
.. warning:: Do NOT start the server yet, because it won't find the |
|
448 |
``synnefo.settings`` module. Also, in case you are using ``/etc/hosts`` |
|
449 |
instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to |
|
450 |
``--worker-class=sync``. We will start the server after successful |
|
451 |
installation of astakos. If the server is running:: |
|
452 |
|
|
453 |
# /etc/init.d/gunicorn stop |
|
454 |
|
|
455 | 449 |
Apache2 setup |
456 | 450 |
~~~~~~~~~~~~~ |
457 | 451 |
|
... | ... | |
531 | 525 |
~~~~~~~~~~~~~~~~~~~ |
532 | 526 |
|
533 | 527 |
Copy the certificate you created before on node1 (`ca.crt`) under the directory |
534 |
``/usr/local/share/ca-certificate`` |
|
535 |
|
|
536 |
and run: |
|
528 |
``/usr/local/share/ca-certificates`` and run: |
|
537 | 529 |
|
538 | 530 |
.. code-block:: console |
539 | 531 |
|
540 |
# update-ca-certificates
|
|
532 |
# update-ca-certificates
|
|
541 | 533 |
|
542 | 534 |
to update the records. |
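If you still need to fetch ``ca.crt`` from node1, a plain ``scp`` is enough (assuming you can ssh from node2 to node1; the source path is the one produced during the certificate creation on node1):

.. code-block:: console

   root@node2:~ # scp node1.example.com:/etc/openvpn/easy-rsa/2.0/keys/ca.crt /usr/local/share/ca-certificates/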
543 | 535 |
|
... | ... | |
549 | 541 |
|
550 | 542 |
.. code-block:: console |
551 | 543 |
|
552 |
nameserver 4.3.2.1
|
|
544 |
nameserver 203.0.113.1
|
|
553 | 545 |
|
554 |
to inform the node about the new dns server. |
|
546 |
to inform the node about the new DNS server. |
|
547 |
|
|
548 |
As mentioned before, this should be the only ``nameserver`` entry in |
|
549 |
``/etc/resolv.conf``. |
|
555 | 550 |
|
556 | 551 |
We are now ready with all general prerequisites for node2. Now that we have |
557 | 552 |
finished with all general prerequisites for both nodes, we can start installing |
... | ... | |
560 | 555 |
Installation of Astakos on node1 |
561 | 556 |
================================ |
562 | 557 |
|
563 |
To install astakos, grab the package from our repository (make sure you made
|
|
564 |
the additions needed in your ``/etc/apt/sources.list`` file, as described
|
|
565 |
previously), by running: |
|
558 |
To install Astakos, grab the package from our repository (make sure you made
|
|
559 |
the additions needed in your ``/etc/apt/sources.list`` file and updated, as
|
|
560 |
described previously), by running:
|
|
566 | 561 |
|
567 | 562 |
.. code-block:: console |
568 | 563 |
|
... | ... | |
573 | 568 |
Configuration of Astakos |
574 | 569 |
======================== |
575 | 570 |
|
571 |
Gunicorn setup |
|
572 |
-------------- |
|
573 |
|
|
574 |
Rename the file ``/etc/gunicorn.d/synnefo.example`` to |
|
575 |
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file: |
|
576 |
|
|
577 |
.. code-block:: console |
|
578 |
|
|
579 |
# mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo |
|
580 |
|
|
581 |
|
|
582 |
.. warning:: Do NOT start the server yet, because it won't find the |
|
583 |
``synnefo.settings`` module. Also, in case you are using ``/etc/hosts`` |
|
584 |
instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to |
|
585 |
``--worker-class=sync``. We will start the server after successful |
|
586 |
installation of Astakos. If the server is running:: |
|
587 |
|
|
588 |
# /etc/init.d/gunicorn stop |
|
589 |
|
|
576 | 590 |
Conf Files |
577 | 591 |
---------- |
578 | 592 |
|
579 |
After astakos is successfully installed, you will find the directory
|
|
593 |
After Astakos is successfully installed, you will find the directory
|
|
580 | 594 |
``/etc/synnefo`` and some configuration files inside it. The files contain |
581 | 595 |
commented configuration options, which are the default options. While installing |
582 | 596 |
new snf-* components, new configuration files will appear inside the directory. |
583 | 597 |
In this guide (and for all services), we will edit only the minimum necessary |
584 | 598 |
configuration options, to reflect our setup. Everything else will remain as is. |
585 | 599 |
|
586 |
After getting familiar with synnefo, you will be able to customize the software
|
|
600 |
After getting familiar with Synnefo, you will be able to customize the software
|
|
587 | 601 |
as you wish, to fit your needs. Many options are available, to empower the |
588 | 602 |
administrator with extensively customizable setups. |
589 | 603 |
|
590 |
For the snf-webproject component (installed as an astakos dependency), we
|
|
604 |
For the snf-webproject component (installed as an Astakos dependency), we
|
|
591 | 605 |
need the following: |
592 | 606 |
|
593 | 607 |
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to |
... | ... | |
605 | 619 |
'USER': 'synnefo', # Not used with sqlite3. |
606 | 620 |
'PASSWORD': 'example_passw0rd', # Not used with sqlite3. |
607 | 621 |
# Set to empty string for localhost. Not used with sqlite3. |
608 |
'HOST': '4.3.2.1',
|
|
622 |
'HOST': '203.0.113.1',
|
|
609 | 623 |
# Set to empty string for default. Not used with sqlite3. |
610 | 624 |
'PORT': '5432', |
611 | 625 |
} |
... | ... | |
620 | 634 |
|
621 | 635 |
SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)' |
622 | 636 |
|
623 |
For astakos specific configuration, edit the following options in
|
|
637 |
For Astakos specific configuration, edit the following options in
|
|
624 | 638 |
``/etc/synnefo/20-snf-astakos-app-settings.conf`` : |
625 | 639 |
|
626 | 640 |
.. code-block:: console |
... | ... | |
630 | 644 |
ASTAKOS_BASE_URL = 'https://node1.example.com/astakos' |
631 | 645 |
|
632 | 646 |
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all |
633 |
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
|
|
647 |
services). ``ASTAKOS_BASE_URL`` is the Astakos top-level URL. Appending an
|
|
634 | 648 |
extra path (``/astakos`` here) is recommended in order to distinguish |
635 | 649 |
components, if more than one are installed on the same machine. |
636 | 650 |
|
... | ... | |
669 | 683 |
Email delivery configuration |
670 | 684 |
---------------------------- |
671 | 685 |
|
672 |
Many of the ``astakos`` operations require server to notify service users and
|
|
673 |
administrators via email. e.g. right after the signup process the service sents
|
|
674 |
an email to the registered email address containing an email verification url,
|
|
675 |
after the user verifies the email address astakos once again needs to notify
|
|
676 |
administrators with a notice that a new account has just been verified. |
|
686 |
Many of the ``Astakos`` operations require the server to notify service users
|
|
687 |
and administrators via email. For example, right after the signup process, the service
|
|
688 |
sends an email to the registered email address containing a verification url.
|
|
689 |
After the user verifies the email address, Astakos once again needs to
|
|
690 |
notify administrators with a notice that a new account has just been verified.
|
|
677 | 691 |
|
678 |
More specifically astakos sends emails in the following cases
|
|
692 |
More specifically, Astakos sends emails in the following cases:
|
|
679 | 693 |
|
680 | 694 |
- An email containing a verification link after each signup process. |
681 | 695 |
- An email to the people listed in ``ADMINS`` setting after each email |
... | ... | |
684 | 698 |
activate the user. |
685 | 699 |
- A welcome email to the user email and an admin notification to ``ADMINS`` |
686 | 700 |
right after each account activation. |
687 |
- Feedback messages submited from astakos contact view and astakos feedback
|
|
701 |
- Feedback messages submitted from Astakos contact view and Astakos feedback
|
|
688 | 702 |
API endpoint are sent to contacts listed in ``HELPDESK`` setting. |
689 | 703 |
- Project application request notifications to people included in ``HELPDESK`` |
690 | 704 |
and ``MANAGERS`` settings. |
... | ... | |
695 | 709 |
notifications. A simple configuration, using an external smtp server to |
696 | 710 |
deliver messages, is shown below. Alter the following example to meet your |
697 | 711 |
smtp server characteristics. Notice that the smtp server is needed for a proper |
698 |
installation |
|
712 |
installation. |
|
713 |
|
|
714 |
Edit ``/etc/synnefo/00-snf-common-admins.conf``: |
|
699 | 715 |
|
700 | 716 |
.. code-block:: python |
701 | 717 |
|
702 |
# /etc/synnefo/00-snf-common-admins.conf |
|
703 |
EMAIL_HOST = "mysmtp.server.synnefo.org" |
|
718 |
EMAIL_HOST = "mysmtp.server.example.com" |
|
704 | 719 |
EMAIL_HOST_USER = "<smtpuser>" |
705 | 720 |
EMAIL_HOST_PASSWORD = "<smtppassword>" |
706 | 721 |
|
707 | 722 |
# this gets appended in all email subjects |
708 |
EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "
|
|
723 |
EMAIL_SUBJECT_PREFIX = "[example.com] "
|
|
709 | 724 |
|
710 | 725 |
# Address to use for outgoing emails |
711 |
DEFAULT_FROM_EMAIL = "server@example.synnefo.org"
|
|
726 |
DEFAULT_FROM_EMAIL = "server@example.com"
|
|
712 | 727 |
|
713 | 728 |
# Email where users can contact for support. This is used in html/email |
714 | 729 |
# templates. |
715 |
CONTACT_EMAIL = "server@example.synnefo.org"
|
|
730 |
CONTACT_EMAIL = "server@example.com"
|
|
716 | 731 |
|
717 | 732 |
# The email address that error messages come from |
718 |
SERVER_EMAIL = "server-errors@example.synnefo.org"
|
|
733 |
SERVER_EMAIL = "server-errors@example.com"
|
|
719 | 734 |
|
720 | 735 |
Notice that since email settings might be required by applications other than |
721 |
astakos they are defined in a different configuration file than the one
|
|
722 |
previously used to set astakos specific settings.
|
|
736 |
Astakos, they are defined in a different configuration file than the one
|
|
737 |
previously used to set Astakos specific settings.
|
|
723 | 738 |
|
724 | 739 |
Refer to |
725 | 740 |
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_ |
726 | 741 |
for additional information on available email settings. |
727 | 742 |
|
728 | 743 |
As referred to in the previous section, based on the operation that triggers |
729 |
an email notification, the recipients list differs. Specifically for |
|
744 |
an email notification, the recipients list differs. Specifically, for
|
|
730 | 745 |
emails whose recipients include contacts from your service team |
731 | 746 |
(administrators, managers, helpdesk etc.), Synnefo provides the following |
732 | 747 |
settings located in ``00-snf-common-admins.conf``: |
733 | 748 |
|
734 | 749 |
.. code-block:: python |
735 | 750 |
|
736 |
ADMINS = (('Admin name', 'admin@example.synnefo.org'),
|
|
737 |
('Admin2 name', 'admin2@example.synnefo.org))
|
|
738 |
MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
|
|
739 |
HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)
|
|
751 |
ADMINS = (('Admin name', 'admin@example.com'),
|
|
752 |
('Admin2 name', 'admin2@example.com'))
|
|
753 |
MANAGERS = (('Manager name', 'manager@example.com'),)
|
|
754 |
HELPDESK = (('Helpdesk user name', 'helpdesk@example.com'),)
|
|
740 | 755 |
|
741 | 756 |
Alternatively, it may be convenient to send e-mails to a file, instead of an actual smtp server, using the file backend. Do so by creating a configuration file ``/etc/synnefo/99-local.conf`` including the following: |
742 | 757 |
|
... | ... | |
803 | 818 |
'USER': 'synnefo', # Not used with sqlite3. |
804 | 819 |
'PASSWORD': 'example_passw0rd', # Not used with sqlite3. |
805 | 820 |
# Set to empty string for localhost. Not used with sqlite3. |
806 |
'HOST': '4.3.2.1',
|
|
821 |
'HOST': '203.0.113.1',
|
|
807 | 822 |
# Set to empty string for default. Not used with sqlite3. |
808 | 823 |
'PORT': '5432', |
809 | 824 |
} |
... | ... | |
820 | 835 |
|
821 | 836 |
In this example, we don't need to create a Django superuser, so we select |
822 | 837 |
``[no]`` to the question. After a successful sync, we run the migration needed |
823 |
for astakos:
|
|
838 |
for Astakos:
|
|
824 | 839 |
|
825 | 840 |
.. code-block:: console |
826 | 841 |
|
... | ... | |
839 | 854 |
--------------------- |
840 | 855 |
|
841 | 856 |
When the database is ready, we need to register the services. The following |
842 |
command will ask you to register the standard Synnefo components (astakos,
|
|
843 |
cyclades, and pithos) along with the services they provide. Note that you
|
|
844 |
have to register at least astakos in order to have a usable authentication
|
|
857 |
command will ask you to register the standard Synnefo components (Astakos,
|
|
858 |
Cyclades and Pithos) along with the services they provide. Note that you
|
|
859 |
have to register at least Astakos in order to have a usable authentication
|
|
845 | 860 |
system. For each component, you will be asked to provide two URLs: its base |
846 | 861 |
URL and its UI URL. |
847 | 862 |
|
848 | 863 |
The former is the location where the component resides; it should equal |
849 | 864 |
the ``<component_name>_BASE_URL`` as specified in the respective component |
850 |
settings. For example, the base URL for astakos would be
|
|
865 |
settings. For example, the base URL for Astakos would be
|
|
851 | 866 |
``https://node1.example.com/astakos``. |
852 | 867 |
|
853 | 868 |
The latter is the URL that appears in the Cloudbar and leads to the |
... | ... | |
866 | 881 |
.. note:: |
867 | 882 |
|
868 | 883 |
This command is equivalent to running the following series of commands; |
869 |
it registers the three components in astakos and then in each host it
|
|
884 |
it registers the three components in Astakos and then in each host it
|
|
870 | 885 |
exports the respective service definitions, copies the exported json file |
871 |
to the astakos host, where it finally imports it:
|
|
886 |
to the Astakos host, where it finally imports it:
|
|
872 | 887 |
|
873 | 888 |
.. code-block:: console |
874 | 889 |
|
... | ... | |
884 | 899 |
# copy the file to astakos-host |
885 | 900 |
astakos-host$ snf-manage service-import --json pithos.json |
886 | 901 |
|
887 |
Notice that in this installation astakos and cyclades are in node1 and pithos is in node2 |
|
902 |
Notice that in this installation Astakos and Cyclades are on node1 and Pithos is on node2.
|
|
888 | 903 |
|
889 | 904 |
Setting Default Base Quota for Resources |
890 | 905 |
---------------------------------------- |
... | ... | |
977 | 992 |
activation <user_activation>` section. |
978 | 993 |
|
979 | 994 |
Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/`` with |
980 |
your browser again. Try to sign in using your new credentials. If the astakos
|
|
995 |
your browser again. Try to sign in using your new credentials. If the Astakos
|
|
981 | 996 |
menu appears and you can see your profile, then you have successfully setup |
982 | 997 |
Astakos. |
983 | 998 |
|
... | ... | |
1001 | 1016 |
|
1002 | 1017 |
# apt-get install snf-pithos-webclient |
1003 | 1018 |
|
1004 |
This package provides the standalone pithos web client. The web client is the
|
|
1005 |
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
|
|
1019 |
This package provides the standalone Pithos web client. The web client is the
|
|
1020 |
web UI for Pithos and will be accessible by clicking "Pithos" on the Astakos
|
|
1006 | 1021 |
interface's cloudbar, at the top of the Astakos homepage. |
1007 | 1022 |
|
1008 | 1023 |
|
... | ... | |
1011 | 1026 |
Configuration of Pithos |
1012 | 1027 |
======================= |
1013 | 1028 |
|
1029 |
Gunicorn setup |
|
1030 |
-------------- |
|
1031 |
|
|
1032 |
Copy the file ``/etc/gunicorn.d/synnefo.example`` to |
|
1033 |
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file |
|
1034 |
(as happened for node1): |
|
1035 |
|
|
1036 |
.. code-block:: console |
|
1037 |
|
|
1038 |
# cp /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo |
|
1039 |
|
|
1040 |
|
|
1041 |
.. warning:: Do NOT start the server yet, because it won't find the |
|
1042 |
``synnefo.settings`` module. Also, in case you are using ``/etc/hosts`` |
|
1043 |
instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to |
|
1044 |
``--worker-class=sync``. We will start the server after successful |
|
1045 |
installation of Astakos. If the server is running:: |
|
1046 |
|
|
1047 |
# /etc/init.d/gunicorn stop |
|
1048 |
|
|
1014 | 1049 |
Conf Files |
1015 | 1050 |
---------- |
1016 | 1051 |
|
1017 | 1052 |
After Pithos is successfully installed, you will find the directory |
1018 | 1053 |
``/etc/synnefo`` and some configuration files inside it, as you did in node1 |
1019 |
after installation of astakos. Here, you will not have to change anything that
|
|
1054 |
after installation of Astakos. Here, you will not have to change anything that
|
|
1020 | 1055 |
has to do with snf-common or snf-webproject. Everything is set at node1. You |
1021 | 1056 |
only need to change settings that have to do with Pithos. Specifically: |
1022 | 1057 |
|
... | ... | |
1050 | 1085 |
|
1051 | 1086 |
The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL. |
1052 | 1087 |
|
1053 |
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
|
|
1088 |
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with Astakos.
|
|
1054 | 1089 |
It can be retrieved by running on the Astakos node (node1 in our case): |
1055 | 1090 |
|
1056 | 1091 |
.. code-block:: console |
... | ... | |
1066 | 1101 |
then it should be changed to ``True``. |
1067 | 1102 |
|
1068 | 1103 |
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the |
1069 |
Pithos web UI with the astakos web UI (through the top cloudbar):
|
|
1104 |
Pithos web UI with the Astakos web UI (through the top cloudbar):
|
|
1070 | 1105 |
|
1071 | 1106 |
.. code-block:: console |
1072 | 1107 |
|
... | ... | |
1074 | 1109 |
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services' |
1075 | 1110 |
CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu' |
1076 | 1111 |
|
1077 |
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
|
|
1112 |
The ``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
|
|
1078 | 1113 |
cloudbar. |
1079 | 1114 |
|
1080 | 1115 |
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the |
1081 |
Pithos web client to get from astakos all the information needed to fill its
|
|
1082 |
own cloudbar. So we put our astakos deployment urls there.
|
|
1116 |
Pithos web client to get from Astakos all the information needed to fill its
|
|
1117 |
own cloudbar. So we put our Astakos deployment urls there.
|
|
1083 | 1118 |
|
1084 | 1119 |
The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view |
1085 | 1120 |
in order to authenticate itself with astakos during the authorization grant |
... | ... | |
1092 | 1127 |
--------------------- |
1093 | 1128 |
|
1094 | 1129 |
Pithos is pooling-ready without the need of further configuration, because it |
1095 |
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
|
|
1130 |
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
|
|
1096 | 1131 |
backend objects for access to the Pithos DB. |
1097 | 1132 |
|
1098 | 1133 |
However, as in Astakos, since we are running with Greenlets, it is also |
... | ... | |
1144 | 1179 |
|
1145 | 1180 |
root@node2:~ # pithos-migrate stamp head |
1146 | 1181 |
|
1182 |
Mount the NFS directory |
|
1183 |
----------------------- |
|
1184 |
|
|
1185 |
First install the package nfs-common by running: |
|
1186 |
|
|
1187 |
.. code-block:: console |
|
1188 |
|
|
1189 |
root@node2:~ # apt-get install nfs-common |
|
1190 |
|
|
1191 |
Now create the directory ``/srv/pithos/`` and mount the remote directory on it: |
|
1192 |
|
|
1193 |
.. code-block:: console |
|
1194 |
|
|
1195 |
root@node2:~ # mkdir /srv/pithos/ |
|
1196 |
root@node2:~ # mount -t nfs 203.0.113.1:/srv/pithos/ /srv/pithos/ |
|
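To make this mount persistent across reboots, you could additionally add an entry like the following to ``/etc/fstab`` on node2 (this is just a suggestion; the guide itself only performs the manual mount):

.. code-block:: console

   203.0.113.1:/srv/pithos/ /srv/pithos/ nfs defaults 0 0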
1197 |
|
|
1147 | 1198 |
Servers Initialization |
1148 | 1199 |
---------------------- |
1149 | 1200 |
|
... | ... | |
1156 | 1207 |
|
1157 | 1208 |
You have now finished the Pithos setup. Let's test it now. |
1158 | 1209 |
|
1159 |
|
|
1160 | 1210 |
Testing of Pithos |
1161 | 1211 |
================= |
1162 | 1212 |
|
... | ... | |
1164 | 1214 |
|
1165 | 1215 |
``http://node1.example.com/astakos`` |
1166 | 1216 |
|
1167 |
Login, and you will see your profile page. Now, click the "pithos" link on the
|
|
1217 |
Login, and you will see your profile page. Now, click the "Pithos" link on the
|
|
1168 | 1218 |
top black cloudbar. If everything was setup correctly, this will redirect you |
1169 | 1219 |
to: |
1170 | 1220 |
|
1221 |
``https://node2.example.com/ui`` |
|
1171 | 1222 |
|
1172 | 1223 |
and you will see the blue interface of the Pithos application. Click the |
1173 | 1224 |
orange "Upload" button and upload your first file. If the file gets uploaded |
... | ... | |
1194 | 1245 |
please continue with the rest of the guide. |
1195 | 1246 |
|
1196 | 1247 |
|
1248 |
Kamaki |
|
1249 |
====== |
|
1250 |
|
|
1251 |
`Kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_ is an |
|
1252 |
OpenStack API client library and command line interface with custom extensions |
|
1253 |
specific to Synnefo. |
|
1254 |
|
|
1255 |
Kamaki Installation and Configuration |
|
1256 |
------------------------------------- |
|
1257 |
|
|
1258 |
To install kamaki run: |
|
1259 |
|
|
1260 |
.. code-block:: console |
|
1261 |
|
|
1262 |
# apt-get install kamaki |
|
1263 |
|
|
1264 |
Now, visit |
|
1265 |
|
|
1266 |
`https://node1.example.com/astakos/ui/` |
|
1267 |
|
|
1268 |
log in and click on ``API access``. Scroll all the way to the bottom of the |
|
1269 |
page, click on the orange ``Download your .kamakirc`` button and save the file |
|
1270 |
as ``.kamakirc`` in your home directory. |
|
1271 |
|
|
1272 |
That's all, kamaki is now configured and you can start using it. For a list of |
|
1273 |
commands, see the `official documentation <http://www.synnefo.org/docs/kamaki/latest/commands.html>`_. |
|
1274 |
|
|
1197 | 1275 |
Cyclades Prerequisites |
1198 | 1276 |
====================== |
1199 | 1277 |
|
... | ... | |
1210 | 1288 |
|
1211 | 1289 |
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management |
1212 | 1290 |
for Cyclades, so Cyclades requires a working Ganeti installation at the backend. |
1213 |
Please refer to the |
|
1214 |
`ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all the |
|
1215 |
gory details. A successful Ganeti installation concludes with a working |
|
1291 |
Please refer to the `ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all |
|
1292 |
the gory details. A successful Ganeti installation concludes with a working |
|
1216 | 1293 |
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs |
1217 | 1294 |
<GANETI_NODES>`. |
1218 | 1295 |
|
... | ... | |
1226 | 1303 |
We highly recommend that you read the official Ganeti documentation, if you are |
1227 | 1304 |
not familiar with Ganeti. |
1228 | 1305 |
|
1229 |
Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't |
|
1230 |
support IP pool management. This feature will be available in Ganeti >= 2.7. |
|
1231 |
Synnefo depends on the IP pool functionality of Ganeti, so you have to use |
|
1232 |
GRNET provided packages until stable 2.7 is out. These packages will also install |
|
1233 |
the proper version of Ganeti. To do so: |
|
1306 |
Ganeti Prerequisites |
|
1307 |
-------------------- |
|
1308 |
You will need the ``lvm2`` and ``vlan`` packages, so run: |
|
1234 | 1309 |
|
1235 | 1310 |
.. code-block:: console |
1236 | 1311 |
|
1237 |
# apt-get install snf-ganeti ganeti-htools |
|
1312 |
# apt-get install lvm2 vlan |
|
1313 |
|
|
1314 |
Ganeti requires FQDNs. To properly configure your nodes, please |
|
1315 |
see `this <http://docs.ganeti.org/ganeti/2.6/html/install.html#hostname-issues>`_. |
|
1316 |
|
|
1317 |
Ganeti requires an extra available IP and its FQDN e.g., ``203.0.113.100`` and |
|
1318 |
``ganeti.node1.example.com``. Add this IP to your DNS server configuration, as |
|
1319 |
explained above. |
|
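With the dnsmasq setup described earlier, this can be as simple as one extra line in node1's ``/etc/hosts`` (the IP below is just the example value mentioned above):

.. code-block:: console

   203.0.113.100 ganeti.node1.example.com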
1320 |
|
|
1321 |
Also, Ganeti will need a volume group with the same name e.g., ``ganeti`` |
|
1322 |
across all nodes, of at least 20GiB. To create the volume group, |
|
1323 |
see `this <http://www.tldp.org/HOWTO/LVM-HOWTO/createvgs.html>`_. |
|
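A minimal sketch of creating such a volume group, assuming a spare partition ``/dev/sdb1`` exists on each node (the device name is an assumption; use whatever free disk or partition you actually have):

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1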
1324 |
|
|
1325 |
Moreover, node1 and node2 must have the same dsa, rsa keys and ``authorized_keys`` |
|
1326 |
under ``/root/.ssh/`` for password-less root ssh between each other. To |
|
1327 |
generate said keys, see `this <https://wiki.debian.org/SSH#Using_shared_keys>`_. |
|
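One possible way to end up with identical keys and ``authorized_keys`` on both nodes is sketched below (run on node1; be aware of the security implications of sharing root keys between machines):

.. code-block:: console

   root@node1:~ # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
   root@node1:~ # cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
   root@node1:~ # scp -r /root/.ssh/ node2:/root/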
1328 |
|
|
1329 |
In the following sections, we assume that the public interface of all nodes is |
|
1330 |
``eth0`` and there are two extra interfaces ``eth1`` and ``eth2``, which can |
|
1331 |
also be vlans on your primary interface e.g., ``eth0.1`` and ``eth0.2`` in |
|
1332 |
case you don't have multiple physical interfaces. For information on how to |
|
1333 |
create vlans, please see |
|
1334 |
`this <https://wiki.debian.org/NetworkConfiguration#Howto_use_vlan_.28dot1q.2C_802.1q.2C_trunk.29_.28Etch.2C_Lenny.29>`_. |
|
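For a quick, non-persistent test, the two assumed VLAN subinterfaces can be created by hand with the tools from the ``vlan`` package installed above (make them permanent in ``/etc/network/interfaces`` as the linked page explains):

.. code-block:: console

   # vconfig add eth0 1
   # vconfig add eth0 2
   # ip link set eth0.1 up
   # ip link set eth0.2 up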
1238 | 1335 |
|
1239 |
Ganeti will make use of drbd. To enable this and make the configuration pemanent |
|
1240 |
you have to do the following : |
|
1336 |
Finally, setup two bridges on the host machines (e.g: br1/br2 on eth1/eth2 |
|
1337 |
respectively), as described `here <https://wiki.debian.org/BridgeNetworkConnections>`_. |
|
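A manual sketch of that bridge setup follows; ``brctl`` comes from the ``bridge-utils`` package and the interface names match the assumptions above:

.. code-block:: console

   # apt-get install bridge-utils
   # brctl addbr br1
   # brctl addif br1 eth1
   # brctl addbr br2
   # brctl addif br2 eth2
   # ip link set br1 up
   # ip link set br2 up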
1338 |
|
|
1339 |
Ganeti Installation and Initialization |
|
1340 |
-------------------------------------- |
|
1341 |
|
|
1342 |
We assume that Ganeti will use the KVM hypervisor. To install KVM, run on all |
|
1343 |
Ganeti nodes: |
|
1241 | 1344 |
|
1242 | 1345 |
.. code-block:: console |
1243 | 1346 |
|
1244 |
# rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true |
|
1245 |
# echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules |
|
1347 |
# apt-get install qemu-kvm |
|
1348 |
|
|
1349 |
It's time to install Ganeti. To be able to use hotplug (which will be part of |
|
1350 |
the official Ganeti 2.10), we recommend using our Ganeti package version: |
|
1351 |
|
|
1352 |
`2.8.2+snapshot1+b64v1+hotplug5+ippoolfix+rapifix+netxen+lockfix2-1~wheezy` |
|
1353 |
|
|
1354 |
Let's briefly explain each patch: |
|
1355 |
|
|
1356 |
* hotplug: hotplug devices (NICs and Disks) (ganeti 2.10) |
|
1357 |
* b64v1: Save bitarray of network IP pools in config file, encoded in base64, instead of 0/1. |
|
1358 |
* ippoolfix: Ability to give an externally reserved IP to an instance (e.g. gateway IP). (ganeti 2.10) |
|
1359 |
* rapifix: Extend RAPI to support 'depends' and 'shutdown_timeout' body arguments. (ganeti 2.9) |
|
1360 |
* netxen: Network configuration for xen instances, exactly like in kvm instances. (ganeti 2.9) |
|
1361 |
* lockfix2: Fixes for 2 locking issues: |
|
1362 |
|
|
1363 |
- Issue 622: Fix for opportunistic locking that caused an assertion error (Patch waiting in ganeti-devel list) |
|
1364 |
- Issue 621: Fix for network locking issue that resulted in: [Lock 'XXXXXX' not found in set 'instance' (it may have been removed)] |
|
1246 | 1365 |
|
1366 |
* snapshot: Add trivial 'snapshot' functionality that is unused by Synnefo or Ganeti. |
|
1247 | 1367 |
|
1248 |
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on |
|
1249 |
both nodes, choose a domain name that resolves to a valid floating IP (let's |
|
1250 |
say it's ``ganeti.node1.example.com``). This IP is needed to communicate with |
|
1251 |
the Ganeti cluster. Make sure node1 and node2 have same dsa,rsa keys and authorised_keys |
|
1252 |
for password-less root ssh between each other. If not then skip passing --no-ssh-init but be |
|
1253 |
aware that it will replace /root/.ssh/* related files and you might lose access to master node. |
|
1254 |
Also, Ganeti will need a volume to host your VMs' disks. So, make sure there is an lvm volume |
|
1255 |
group named ``ganeti``. Finally, setup a bridge interface on the host machines (e.g: br0). This |
|
1256 |
will be needed for the network configuration afterwards. |
|
1368 |
To install Ganeti run: |
|
1369 |
|
|
1370 |
.. code-block:: console |
|
1371 |
|
|
1372 |
# apt-get install snf-ganeti ganeti-htools ganeti-haskell |
|
1373 |
|
|
1374 |
Ganeti will make use of drbd. To enable this and make the configuration |
|
1375 |
permanent you have to do the following : |
|
1376 |
|
|
1377 |
.. code-block:: console |
|
1378 |
|
|
1379 |
# modprobe drbd minor_count=255 usermode_helper=/bin/true |
|
1380 |
# echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules |
|
1257 | 1381 |
|
1258 | 1382 |
Then run on node1: |
1259 | 1383 |
|
1260 | 1384 |
.. code-block:: console |
1261 | 1385 |
|
1262 | 1386 |
root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \ |
1263 |
--no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \ |
|
1387 |
--no-etc-hosts --vg-name=ganeti --nic-parameters link=br1 \ |
|
1388 |
--default-iallocator hail \ |
|
1389 |
--hypervisor-parameters kvm:kernel_path=,vnc_bind_address=0.0.0.0 \ |
|
1264 | 1390 |
--master-netdev eth0 ganeti.node1.example.com |
1265 |
root@node1:~ # gnt-cluster modify --default-iallocator hail |
|
1266 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path= |
|
1267 |
root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0 |
|
1268 |
|
|
1391 |
|
|
1269 | 1392 |
root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \ |
1270 | 1393 |
--vm-capable=yes node2.example.com |
1271 | 1394 |
root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti |
1272 | 1395 |
root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default |
1273 | 1396 |
|
1397 |
``br1`` will be the default interface for any newly created VMs. |
|
1398 |
|
|
1274 | 1399 |
You can verify that the Ganeti cluster is successfully set up by running on the |
1275 | 1400 |
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1): |
1276 | 1401 |
|
... | ... | |
1278 | 1403 |
|
1279 | 1404 |
# gnt-cluster verify |
1280 | 1405 |
|
1281 |
For any problems you may stumble upon installing Ganeti, please refer to the |
|
1282 |
`official documentation <http://docs.ganeti.org/ganeti/2.6/html>`_. Installation |
|
1283 |
of Ganeti is out of the scope of this guide. |
|
1284 |
|
|
1285 | 1406 |
.. _cyclades-install-snfimage: |
1286 | 1407 |
|
1287 | 1408 |
snf-image |
... | ... | |
1290 | 1411 |
Installation |
1291 | 1412 |
~~~~~~~~~~~~ |
1292 | 1413 |
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images, |
1293 |
you need the :ref: |
|
1294 |
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>` OS |
|
1414 |
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ OS |
|
1295 | 1415 |
Definition installed on *all* VM-capable Ganeti nodes. This means we need |
1296 | 1416 |
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ on |
1297 | 1417 |
node1 and node2. You can do this by running on *both* nodes: |
... | ... | |
1309 | 1429 |
snf-image uses ``curl`` for handling URLs. This means that it will |
1310 | 1430 |
not work out of the box if you try to use URLs served by servers which do |
1311 | 1431 |
not have a valid certificate. In case you haven't followed the guide's |
1312 |
directions about the certificates,in order to circumvent this you should edit the file |
|
1432 |
directions about the certificates, in order to circumvent this you should edit the file
|
|
1313 | 1433 |
``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"`` on every node. |
1314 | 1434 |
|
1315 | 1435 |
Configuration |
... | ... | |
1319 | 1439 |
public URL. More details, are described in the next section. For now, the only |
1320 | 1440 |
thing we need to do, is configure snf-image to access our Pithos backend. |
1321 | 1441 |
|
1322 |
To do this, we need to set the corresponding variables in
|
|
1442 |
To do this, we need to set the corresponding variable in |
|
1323 | 1443 |
``/etc/default/snf-image``, to reflect our Pithos setup: |
1324 | 1444 |
|
1325 | 1445 |
.. code-block:: console |
1326 | 1446 |
|
1327 |
PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos" |
|
1328 |
|
|
1329 | 1447 |
PITHOS_DATA="/srv/pithos/data" |
1330 | 1448 |
|
1331 | 1449 |
If you have installed your Ganeti cluster on different nodes than node1 and |
... | ... | |
1386 | 1504 |
UI or the command line client `kamaki |
1387 | 1505 |
<http://www.synnefo.org/docs/kamaki/latest/index.html>`_. |
1388 | 1506 |
|
1507 |
To upload the file using kamaki, run: |
|
1508 |
|
|
1509 |
.. code-block:: console |
|
1510 |
|
|
1511 |
# kamaki file upload debian_base-6.0-x86_64.diskdump pithos |
|
1512 |
|
|
1389 | 1513 |
Once the Image is uploaded successfully, download the Image's metadata file |
1390 | 1514 |
from the official snf-image page. You will need it, for spawning a VM from |
1391 | 1515 |
Ganeti, in the next section. |
... | ... | |
1401 | 1525 |
|
1402 | 1526 |
Now, it is time to test our installation so far. So, we have Astakos and |
1403 | 1527 |
Pithos installed, we have a working Ganeti installation, the snf-image |
1404 |
definition installed on all VM-capable nodes and a Debian Squeeze Image on |
|
1405 |
Pithos. Make sure you also have the `metadata file |
|
1406 |
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image. |
|
1528 |
definition installed on all VM-capable nodes, a Debian Squeeze Image on |
|
1529 |
Pithos and kamaki installed and configured. Make sure you also have the |
|
1530 |
`metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_ |
|
1531 |
for this image. |
|
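If you prefer the command line, the metadata file can be fetched with a plain wget:

.. code-block:: console

   # wget http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta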
1532 |
|
|
1533 |
To spawn a VM from a Pithos file, we need to know: |
|
1534 |
|
|
1535 |
1) The hashmap of the file |
|
1536 |
2) The size of the file |
|
1537 |
|
|
1538 |
If you uploaded the file with kamaki as described above, run: |
|
1539 |
|
|
1540 |
.. code-block:: console |
|
1541 |
|
|
1542 |
# kamaki file info pithos:debian_base-6.0-x86_64.diskdump |
|
1543 |
|
|
1544 |
else, replace ``pithos`` and ``debian_base-6.0-x86_64.diskdump`` with the |
|
1545 |
container and filename you used, when uploading the file. |
|
1546 |
|
|
1547 |
The hashmap is the field ``x-object-hash``, while the size of the file is the |
|
1548 |
``content-length`` field, that ``kamaki file info`` command returns. |
|
1407 | 1549 |
|
1408 | 1550 |
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line: |
1409 | 1551 |
|
1410 | 1552 |
.. code-block:: console |
1411 | 1553 |
|
1412 | 1554 |
# gnt-instance add -o snf-image+default --os-parameters \ |
1413 |
img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
|
|
1555 |
img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithosmap://<HashMap>/<Size>",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
|
|
1414 | 1556 |
-t plain --disk 0:size=2G --no-name-check --no-ip-check \ |
1415 | 1557 |
testvm1 |
1416 | 1558 |
|
... | ... | |
1419 | 1561 |
* ``img_passwd``: the arbitrary root password of your new instance |
1420 | 1562 |
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image |
1421 | 1563 |
* ``img_id``: If you want to deploy an Image stored on Pithos (our case), this |
1422 |
should have the format ``pithos://<UUID>/<container>/<filename>``:
|
|
1423 |
* ``UUID``: the username found in Cyclades Web UI under API access
|
|
1424 |
* ``container``: ``pithos`` (default, if the Web UI was used)
|
|
1425 |
* ``filename``: the name of file (visible also from the Web UI)
|
|
1564 |
should have the format ``pithosmap://<HashMap>/<size>``:
|
|
1565 |
* ``HashMap``: the map of the file
|
|
1566 |
* ``size``: the size of the file, same size as reported in
|
|
1567 |
``ls -la filename``
|
|
1426 | 1568 |
* ``img_properties``: taken from the metadata file. Used only the two mandatory |
1427 | 1569 |
properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more |
1428 | 1570 |
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_ |
... | ... | |
1461 | 1603 |
------------------------- |
1462 | 1604 |
|
1463 | 1605 |
This part is deployment-specific and must be customized based on the specific |
1464 |
needs of the system administrator. However, to do so, the administrator needs |
|
1465 |
to understand how each level handles Virtual Networks, to be able to setup the |
|
1466 |
backend appropriately, before installing Cyclades. To do so, please read the |
|
1467 |
:ref:`Network <networks>` section before proceeding. |
|
1468 |
|
|
1469 |
Since synnefo 0.11 all network actions are managed with the snf-manage |
|
1470 |
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd, |
|
1471 |
snf-network, bridges, vlans) to be already configured correctly. The only |
|
1472 |
actions needed in this point are: |
|
1473 |
|
|
1474 |
a) Have Ganeti with IP pool management support installed. |
|
1475 |
|
|
1476 |
b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc. |
|
1606 |
needs of the system administrator. |
|
1477 | 1607 |
|
1478 |
c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs. |
|
1479 |
|
|
1480 |
In order to test that everything is setup correctly before installing Cyclades, |
|
1481 |
we will make some testing actions in this section, and the actual setup will be |
|
1482 |
done afterwards with snf-manage commands. |
|
1608 |
In this section, we'll describe the simplest scenario, which will provide |
|
1609 |
access to the public Internet along with private networking capabilities for |
|
1610 |
the VMs. |
|
1483 | 1611 |
|
1484 | 1612 |
.. _snf-network: |
1485 | 1613 |
|
1486 | 1614 |
snf-network |
1487 | 1615 |
~~~~~~~~~~~ |
1488 | 1616 |
|
1489 |
snf-network includes `kvm-vif-bridge` script that is invoked every time |
|
1490 |
a tap (a VM's NIC) is created. Based on environment variables passed by |
|
1491 |
Ganeti it issues various commands depending on the network type the NIC is |
|
1492 |
connected to and sets up a corresponding dhcp lease. |
|
1617 |
snf-network is a set of custom scripts that perform all the necessary actions, |
|
1618 |
so that VMs have a working networking configuration. |
|
1493 | 1619 |
|
1494 | 1620 |
Install snf-network on all Ganeti nodes: |
1495 | 1621 |
|
... | ... | |
1508 | 1634 |
nfdhcpd |
1509 | 1635 |
~~~~~~~ |
1510 | 1636 |
|
1511 |
Each NIC's IP is chosen by Ganeti (with IP pool management support). |
|
1512 |
`kvm-vif-bridge` script sets up dhcp leases and when the VM boots and |
|
1513 |
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will |
|
1514 |
create a dhcp response. |
|
1637 |
nfdhcpd is an NFQUEUE based daemon, answering DHCP requests and running locally |
|
1638 |
on every Ganeti node. Its leases file gets automatically updated by |
|
1639 |
snf-network, based on information provided by Ganeti. |
|
1515 | 1640 |
|
1516 | 1641 |
.. code-block:: console |
1517 | 1642 |
|
1518 |
# apt-get install nfqueue-bindings-python=0.3+physindev-1
|
|
1643 |
# apt-get install python-nfqueue=0.4+physindev-1~wheezy
|
|
1519 | 1644 |
# apt-get install nfdhcpd |
1520 | 1645 |
|
1521 | 1646 |
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At |
1522 | 1647 |
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers`` |
1523 |
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
|
|
1524 |
VMs. Once you are finished, restart the server on all nodes:
|
|
1648 |
variable to your DNS IP/s (the one running dnsmasq for instance or you can use
|
|
1649 |
Google's DNS server ``8.8.8.8``). Restart the server on all nodes:
|
|
1525 | 1650 |
|
1526 | 1651 |
.. code-block:: console |
1527 | 1652 |
|
1528 | 1653 |
# /etc/init.d/nfdhcpd restart |
1529 | 1654 |
|
1530 |
If you are using ``ferm``, then you need to run the following: |
|
1531 |
|
|
1532 |
.. code-block:: console |
|
1533 |
|
|
1534 |
# echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf |
|
1535 |
# /etc/init.d/ferm restart |
|
1536 |
|
|
1537 |
or make sure to run after boot: |
|
1655 |
In order for nfdhcpd to receive the VMs' requests, we have to mangle all DHCP |
|
1656 |
traffic coming from the corresponding interfaces. To accomplish that, run: |
|
1538 | 1657 |
|
1539 | 1658 |
.. code-block:: console |
1540 | 1659 |
|
1541 | 1660 |
# iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42 |
1542 | 1661 |
|
1543 |
and if you have IPv6 enabled: |
|
1544 |
|
|
1545 |
.. code-block:: console |
|
1546 |
|
|
1547 |
# ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43 |
|
1548 |
# ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44 |
|
1662 |
and append it to your ``/etc/rc.local``. |
|
1549 | 1663 |
|
1550 | 1664 |
You can check which clients are currently served by nfdhcpd by running: |
1551 | 1665 |
|
... | ... | |
1558 | 1672 |
Public Network Setup |
1559 | 1673 |
-------------------- |

In the following section, we'll guide you through a very basic network setup.
This assumes the following:

* Node1 has access to the public network via eth0.
* Node1 will become a NAT server for the VMs.
* All nodes have ``br1/br2`` dedicated for the VMs' public/private traffic (a
  short bridge-creation sketch follows this list, in case they do not exist yet).
* VMs' public network is ``10.0.0.0/24`` with gateway ``10.0.0.1``.
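
If the ``br1`` and ``br2`` bridges are not present on your nodes yet, a minimal
way to create them by hand is sketched below; you will probably want to make
them persistent through your distribution's ``/etc/network/interfaces`` instead:

.. code-block:: console

   # apt-get install bridge-utils
   # brctl addbr br1
   # brctl addbr br2

The bridges are brought up later in this section, right before the test VM is
created.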

Setting up the NAT server on node1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To set up the NAT server on node1, run:

.. code-block:: console

   # ip addr add 10.0.0.1/24 dev br1
   # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   # echo 1 > /proc/sys/net/ipv4/ip_forward

and append it to your ``/etc/rc.local``.
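
If you prefer not to rely on ``/etc/rc.local`` for the forwarding flag, the same
setting can also be made persistent via sysctl (a sketch, assuming a stock
Debian layout; the iptables rule itself still has to be restored at boot by a
mechanism of your choice):

.. code-block:: console

   # echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
   # sysctl -p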

Testing the Public Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~

First, add the network in Ganeti:

.. code-block:: console

   # gnt-network add --network=10.0.0.0/24 --gateway=10.0.0.1 --tags=nfdhcpd test-net-public

Then, provide connectivity mode and link to the network:

.. code-block:: console

   # gnt-network connect test-net-public bridged br1

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, almost the same way we did it in the
previous testing section. However, now we'll also add one NIC, configured to be
managed from our previously defined network.

Fetch the Debian Old Base image locally (on all nodes) by running:

.. code-block:: console

   # wget http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump -O /var/lib/snf-image/debian_base-6.0-x86_64.diskdump

Also on all nodes, bring all ``br*`` interfaces up:

.. code-block:: console

   # ifconfig br1 up
   # ifconfig br2 up

Finally, run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

The following things should happen:

* Ganeti creates a tap interface.
* snf-network bridges the tap interface to ``br1`` and updates the nfdhcpd state.
* nfdhcpd serves the IP 10.0.0.2 to the interface of ``testvm2``.
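
If you want to verify this by hand, connect to ``testvm2`` over VNC, as in the
previous test, and inspect its network configuration (the exact address may
differ if other leases have already been handed out):

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

You should see the ``10.0.0.2`` address on the public interface, a default route
via ``10.0.0.1`` and the nameservers you configured in ``nfdhcpd.conf``.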

Now try to ping the outside world, e.g. ``www.synnefo.org``, from inside the VM
(connect to the VM using VNC as before).

Make sure everything works as expected before proceeding with the Private
Networks setup. |

...

Private Networks Setup
----------------------

In this section, we'll describe a basic network configuration that will provide
isolated private networks to the end-users. All private network traffic will
pass through ``br1`` and isolation will be guaranteed with a specific set of
``ebtables`` rules.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We'll create two instances and connect them to the same Private Network. This
means that the instances will have a second NIC connected to ``br1``.

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac bridged br1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac -n node2 \
                      testvm4

Above, we create two instances with the first NIC connected to the internet and
their second NIC connected to a MAC filtered private Network. Now, connect to the
instances using VNC and make sure everything works as expected: |

a) The instances have access to the public internet through their first eth
   interface (``eth0``), which has been automatically assigned a "public" IP.

b) ``eth1`` will have mac prefix ``aa:00:55``

c) On testvm3 ping 192.168.1.2 (a sketch of the relevant in-VM commands follows
   this list)
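
A minimal sequence of in-VM commands covering checks (b) and (c) could look like
the following (a sketch based on the assumptions above; the private NICs may not
be configured automatically inside the image, so you may need to bring them up
and request a lease by hand):

.. code-block:: console

   root@testvm3:~ # ip link show eth1
   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # ping 192.168.1.2

Here ``192.168.1.2`` is assumed to be the private address that nfdhcpd handed
out to the other instance's second NIC; adjust it if the pool assigned different
addresses.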

If everything works as expected, then you have finished the Network Setup at the
backend for both types of Networks (Public & Private). |

...

Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine. |

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with Astakos.
It can be retrieved by running on the Astakos node (node1 in our case): |

.. code-block:: console

   ...

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`Rapi User <rapi-user>`
correctly. |

.. code-block:: console

   ...

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every |
created NIC on a bridge.

.. code-block:: console

   $ snf-manage network-create --subnet=10.0.0.0/24 \
                               --gateway=10.0.0.1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br1 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

...

.. code-block:: console

   # snf-manage reconcile-networks

You can use ``snf-manage reconcile-networks --fix-all`` to fix any
inconsistencies that may have arisen.

You can see all available networks by running:

.. code-block:: console

   # snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   # snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER: |

.. code-block:: console

   # gnt-network list
   # gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

...

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

Also, change the Synnefo setting in :file:`/etc/synnefo/20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'br2'

Servers restart
---------------

...

---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Login and then click 'Cyclades' on the
top cloud bar. This should redirect you to: |