.. _quick-install-admin-guide:
2

    
3
Administrator's Installation Guide
4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5

    
6
This is the Administrator's installation guide.
7

    
8
It describes how to install the whole Synnefo stack on two (2) physical nodes,
with minimal configuration. It installs Synnefo from Debian packages and
assumes the nodes run Debian Wheezy. After a successful installation, you will
have the following services running:
12

    
13
    * Identity Management (Astakos)
14
    * Object Storage Service (Pithos)
15
    * Compute Service (Cyclades)
16
    * Image Service (part of Cyclades)
17
    * Network Service (part of Cyclades)
18

    
19
and a single unified Web UI to manage them all.
20

    
21
If you just want to install the Object Storage Service (Pithos), follow the
guide and simply stop after the "Testing of Pithos" section.
23

    
24

    
25
Installation of Synnefo / Introduction
26
======================================
27

    
28
We will install the services in the order listed above. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).
34

    
35
For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their public IPs are
"203.0.113.1" and "203.0.113.2" respectively. It is important that the two
machines are under the same domain name. If you choose to do a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the node1 section below for more information on how to do so.
42

    
43
General Prerequisites
44
=====================
45

    
46
These are the general Synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos, Cyclades).
48

    
49
To be able to download all synnefo components you need to add the following
50
lines in your ``/etc/apt/sources.list`` file:
51

    
52
| ``deb http://apt.dev.grnet.gr wheezy/``
53
| ``deb-src http://apt.dev.grnet.gr wheezy/``
54

    
55
and import the repo's GPG key:
56

    
57
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``
58

    
59
Update your list of packages and continue with the installation:
60

    
61
.. code-block:: console
62

    
63
   # apt-get update
64

    
65
You also need a shared directory visible to both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
72

    
73
Before starting the synnefo installation, you will need basic third party
74
software to be installed and configured on the physical nodes. We will describe
75
each node's general prerequisites separately. Any additional configuration,
76
specific to a synnefo service for each node, will be described at the service's
77
section.
78

    
79
Finally, it is required for Cyclades and Ganeti nodes to have synchronized
80
system clocks (e.g. by running ntpd).
81

    
82
Node1
83
-----
84

    
85

    
86
General Synnefo dependencies
87
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
88

    
89
		* apache (http server)
90
		* public certificate
91
		* gunicorn (WSGI http server)
92
		* postgresql (database)
93
		* rabbitmq (message queue)
94
		* ntp (NTP daemon)
95
		* gevent
96
		* dnsmasq (DNS server)
97

    
98
You can install apache2, postgresql, ntp and rabbitmq by running:
99

    
100
.. code-block:: console
101

    
102
   # apt-get install apache2 postgresql ntp rabbitmq-server
103

    
104
To install gunicorn and gevent, run:
105

    
106
.. code-block:: console
107

    
108
   # apt-get install gunicorn python-gevent
109

    
110
On node1, we will create our databases, so you will also need the
111
python-psycopg2 package:
112

    
113
.. code-block:: console
114

    
115
   # apt-get install python-psycopg2
116

    
117
Database setup
118
~~~~~~~~~~~~~~
119

    
120
On node1, we create a database called ``snf_apps``, which will host all tables
of the Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
123

    
124
.. code-block:: console
125

    
126
    root@node1:~ # su - postgres
127
    postgres@node1:~ $ psql
128
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
129
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
130
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;
131

    
132
We also create the database ``snf_pithos`` needed by the Pithos backend and
133
grant the ``synnefo`` user all privileges on the database. This database could
134
be created on node2 instead, but we do it on node1 for simplicity. We will
135
create all needed databases on node1 and then node2 will connect to them.
136

    
137
.. code-block:: console
138

    
139
    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
140
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
141

    
142
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/9.1/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
145

    
146
.. code-block:: console
147

    
148
    listen_addresses = '*'
149

    
150
Furthermore, edit ``/etc/postgresql/9.1/main/pg_hba.conf`` to allow node1 and
151
node2 to connect to the database. Add the following lines under ``#IPv4 local
152
connections:`` :
153

    
154
.. code-block:: console
155

    
156
    host		all	all	203.0.113.1/32	md5
157
    host		all	all	203.0.113.2/32	md5
158

    
159
Make sure to substitute "203.0.113.1" and "203.0.113.2" with node1's and node2's
160
actual IPs. Now, restart the server to apply the changes:
161

    
162
.. code-block:: console
163

    
164
   # /etc/init.d/postgresql restart
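
Once PostgreSQL is back up, you can optionally verify that remote connections
work. A minimal sanity check, run from node2 after it has the PostgreSQL client
installed (the host, user and database names are the example values used
above):

.. code-block:: console

   root@node2:~ # psql -h 203.0.113.1 -U synnefo -d snf_apps -c 'SELECT 1;'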
165

    
166

    
167
Certificate Creation
168
~~~~~~~~~~~~~~~~~~~~~
169

    
170
Node1 will host Cyclades. Cyclades should communicate with the other Synnefo
services and users over a secure channel. In order for the connection to be
trusted, the keys provided to Apache below should be signed with a certificate.
This certificate should be added to all nodes. If you don't have signed keys,
you can create a self-signed certificate and sign your keys with it. To do so,
run on node1:
175

    
176
.. code-block:: console
177

    
178
		# apt-get install openvpn
179
		# mkdir /etc/openvpn/easy-rsa
180
		# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
181
		# cd /etc/openvpn/easy-rsa/2.0
182
		# vim vars
183

    
184
In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.
185

    
186
.. code-block:: console
187

    
188
	# . ./vars
189
	# ./clean-all
190

    
191
Now you can create the certificate:
192

    
193
.. code-block:: console
194

    
195
		# ./build-ca
196

    
197
The previous command will create a ``ca.crt`` file in the directory ``/etc/openvpn/easy-rsa/2.0/keys``.
Copy this file under the ``/usr/local/share/ca-certificates/`` directory and run:
199

    
200
.. code-block:: console
201

    
202
		# update-ca-certificates
203

    
204
to update the records. You will have to do the same on node2 as well.
205

    
206
Now you can create the keys and sign them with the certificate:
207

    
208
.. code-block:: console
209

    
210
		# ./build-key-server node1.example.com
211

    
212
This will create the files ``01.pem`` and ``node1.example.com.key`` in the
``/etc/openvpn/easy-rsa/2.0/keys`` directory. Copy them to ``/etc/ssl/certs/``
and ``/etc/ssl/private/`` respectively and use them in the apache2
configuration file below instead of the defaults.
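
For example (a hedged sketch; adjust the paths if your easy-rsa version
differs), the copy step could look like this:

.. code-block:: console

   # cp /etc/openvpn/easy-rsa/2.0/keys/01.pem /etc/ssl/certs/
   # cp /etc/openvpn/easy-rsa/2.0/keys/node1.example.com.key /etc/ssl/private/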
216

    
217
Apache2 setup
218
~~~~~~~~~~~~~
219

    
220
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
221
following:
222

    
223
.. code-block:: console
224

    
225
    <VirtualHost *:80>
226
        ServerName node1.example.com
227

    
228
        RewriteEngine On
229
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
230
        RewriteRule ^(.*)$ - [F,L]
231
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
232
    </VirtualHost>
233

    
234

    
235
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
236
following:
237

    
238
.. code-block:: console
239

    
240
    <IfModule mod_ssl.c>
241
    <VirtualHost _default_:443>
242
        ServerName node1.example.com
243

    
244
        Alias /static "/usr/share/synnefo/static"
245

    
246
        #  SetEnv no-gzip
247
        #  SetEnv dont-vary
248

    
249
       AllowEncodedSlashes On
250

    
251
       RequestHeader set X-Forwarded-Protocol "https"
252

    
253
    <Proxy * >
254
        Order allow,deny
255
        Allow from all
256
    </Proxy>
257

    
258
        SetEnv                proxy-sendchunked
259
        SSLProxyEngine        off
260
        ProxyErrorOverride    off
261

    
262
        ProxyPass        /static !
263
        ProxyPass        / http://localhost:8080/ retry=0
264
        ProxyPassReverse / http://localhost:8080/
265

    
266
        RewriteEngine On
267
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
268
        RewriteRule ^(.*)$ - [F,L]
269

    
270
        SSLEngine on
271
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
272
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
273
    </VirtualHost>
274
    </IfModule>
275

    
276
Now enable sites and modules by running:
277

    
278
.. code-block:: console
279

    
280
   # a2enmod ssl
281
   # a2enmod rewrite
282
   # a2dissite default
283
   # a2ensite synnefo
284
   # a2ensite synnefo-ssl
285
   # a2enmod headers
286
   # a2enmod proxy_http
287

    
288
.. note:: This isn't really needed, but it's a good security practice to disable
289
    directory listing in apache::
290

    
291
        # a2dismod autoindex
292

    
293

    
294
.. warning:: Do NOT start/restart the server yet. If the server is running::
295

    
296
       # /etc/init.d/apache2 stop
297

    
298

    
299
.. _rabbitmq-setup:
300

    
301
Message Queue setup
302
~~~~~~~~~~~~~~~~~~~
303

    
304
The message queue will run on node1, so we need to create the appropriate
305
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
306
exchanges:
307

    
308
.. code-block:: console
309

    
310
   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
311
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
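
To quickly confirm that the user was created, you can list the existing users
(the output format varies between RabbitMQ versions):

.. code-block:: console

   # rabbitmqctl list_users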
312

    
313
We do not need to initialize the exchanges. This will be done automatically,
314
during the Cyclades setup.
315

    
316
Pithos data directory setup
317
~~~~~~~~~~~~~~~~~~~~~~~~~~~
318

    
319
As mentioned in the General Prerequisites section, there should be a directory
320
called ``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:
322

    
323
.. code-block:: console
324

    
325
   # mkdir /srv/pithos
326
   # cd /srv/pithos
327
   # mkdir data
328
   # chown www-data:www-data data
329
   # chmod g+ws data
330

    
331
This directory must be shared via `NFS <https://en.wikipedia.org/wiki/Network_File_System>`_.
332
In order to do this, run:
333

    
334
.. code-block:: console
335

    
336
   # apt-get install rpcbind nfs-kernel-server
337

    
338
Now edit ``/etc/exports`` and add the following line:
339

    
340
.. code-block:: console
341

    
342
   /srv/pithos/ 203.0.113.2(rw,no_root_squash,sync,subtree_check)
343

    
344
Once done, run:
345

    
346
.. code-block:: console
347

    
348
   # /etc/init.d/nfs-kernel-server restart
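
You can optionally verify that the export is active (a simple sanity check,
using node1's example IP):

.. code-block:: console

   # showmount -e 203.0.113.1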
349

    
350

    
351
DNS server setup
352
~~~~~~~~~~~~~~~~
353

    
354
If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:
356

    
357
.. code-block:: console
358

    
359
   # apt-get install dnsmasq
360

    
361
Then edit your ``/etc/hosts`` file as follows:
362

    
363
.. code-block:: console
364

    
365
		203.0.113.1     node1.example.com
366
		203.0.113.2     node2.example.com
367

    
368
dnsmasq will serve the entries found in ``/etc/hosts`` and forward any other
queries to the nameservers listed in ``/etc/resolv.conf``.
369

    
370
There is a `"bug" in libevent 2.0.5 <http://sourceforge.net/p/levent/bugs/193/>`_
371
, where if you have multiple nameservers in your ``/etc/resolv.conf``, libevent
372
will round-robin against them. To avoid this, you must use a single nameserver
373
for all your needs. Edit your ``/etc/resolv.conf`` to include your dns server:
374

    
375
.. code-block:: console
376

    
377
   nameserver 203.0.113.1
378

    
379
Because of the aforementioned bug, you can't specify more than one DNS server
in your ``/etc/resolv.conf``. In order for dnsmasq to serve domains not in
381
``/etc/hosts``, edit ``/etc/dnsmasq.conf`` and change the line starting with
382
``#resolv-file=`` to:
383

    
384
.. code-block:: console
385

    
386
   resolv-file=/etc/external-dns
387

    
388
Now create the file ``/etc/external-dns`` and specify any extra DNS servers you
389
want dnsmasq to query for domains, e.g., 8.8.8.8:
390

    
391
.. code-block:: console
392

    
393
   nameserver 8.8.8.8
394

    
395
In the ``/etc/dnsmasq.conf`` file, you can also specify the ``listen-address``
and the ``interface`` you would like dnsmasq to listen on.
397

    
398
Finally, restart dnsmasq:
399

    
400
.. code-block:: console
401

    
402
   # /etc/init.d/dnsmasq restart
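
To verify that name resolution works as expected, you can query the new DNS
server directly (``dig`` is provided by the ``dnsutils`` package; this is just
an optional sanity check using the example IPs above):

.. code-block:: console

   # apt-get install dnsutils
   # dig @203.0.113.1 node2.example.com +short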
403

    
404
You are now ready with all general prerequisites concerning node1. Let's go to
405
node2.
406

    
407
Node2
408
-----
409

    
410
General Synnefo dependencies
411
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
412

    
413
    * apache (http server)
414
    * gunicorn (WSGI http server)
415
    * postgresql (database)
416
    * ntp (NTP daemon)
417
    * gevent
418
    * certificates
419
    * dnsmasq (DNS server)
420

    
421
You can install the above by running:
422

    
423
.. code-block:: console
424

    
425
   # apt-get install apache2 postgresql ntp
426

    
427
To install gunicorn and gevent, run:
428

    
429
.. code-block:: console
430

    
431
   # apt-get install gunicorn python-gevent
432

    
433
Node2 will connect to the databases on node1, so you will also need the
434
python-psycopg2 package:
435

    
436
.. code-block:: console
437

    
438
   # apt-get install python-psycopg2
439

    
440
Database setup
441
~~~~~~~~~~~~~~
442

    
443
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software, you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.
448

    
449
Apache2 setup
450
~~~~~~~~~~~~~
451

    
452
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
453
following:
454

    
455
.. code-block:: console
456

    
457
    <VirtualHost *:80>
458
        ServerName node2.example.com
459

    
460
        RewriteEngine On
461
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
462
        RewriteRule ^(.*)$ - [F,L]
463
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
464
    </VirtualHost>
465

    
466
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
467
containing the following:
468

    
469
.. code-block:: console
470

    
471
    <IfModule mod_ssl.c>
472
    <VirtualHost _default_:443>
473
        ServerName node2.example.com
474

    
475
        Alias /static "/usr/share/synnefo/static"
476

    
477
        SetEnv no-gzip
478
        SetEnv dont-vary
479
        AllowEncodedSlashes On
480

    
481
        RequestHeader set X-Forwarded-Protocol "https"
482

    
483
        <Proxy * >
484
            Order allow,deny
485
            Allow from all
486
        </Proxy>
487

    
488
        SetEnv                proxy-sendchunked
489
        SSLProxyEngine        off
490
        ProxyErrorOverride    off
491

    
492
        ProxyPass        /static !
493
        ProxyPass        / http://localhost:8080/ retry=0
494
        ProxyPassReverse / http://localhost:8080/
495

    
496
        SSLEngine on
497
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
498
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
499
    </VirtualHost>
500
    </IfModule>
501

    
502
As in node1, enable sites and modules by running:
503

    
504
.. code-block:: console
505

    
506
   # a2enmod ssl
507
   # a2enmod rewrite
508
   # a2dissite default
509
   # a2ensite synnefo
510
   # a2ensite synnefo-ssl
511
   # a2enmod headers
512
   # a2enmod proxy_http
513

    
514
.. note:: This isn't really needed, but it's a good security practice to disable
515
    directory listing in apache::
516

    
517
        # a2dismod autoindex
518

    
519
.. warning:: Do NOT start/restart the server yet. If the server is running::
520

    
521
       # /etc/init.d/apache2 stop
522

    
523

    
524
Acquire certificate
525
~~~~~~~~~~~~~~~~~~~
526

    
527
Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/`` and run:
529

    
530
.. code-block:: console
531

    
532
   # update-ca-certificates
533

    
534
to update the records.
535

    
536

    
537
DNS Setup
538
~~~~~~~~~
539

    
540
Add the following line to the ``/etc/resolv.conf`` file
541

    
542
.. code-block:: console
543

    
544
   nameserver 203.0.113.1
545

    
546
to inform the node about the new DNS server.
547

    
548
As mentioned before, this should be the only ``nameserver`` entry in
549
``/etc/resolv.conf``.
550

    
551
We are now ready with all general prerequisites for node2. Now that we have
552
finished with all general prerequisites for both nodes, we can start installing
553
the services. First, let's install Astakos on node1.
554

    
555
Installation of Astakos on node1
556
================================
557

    
558
To install Astakos, grab the package from our repository (make sure  you made
559
the additions needed in your ``/etc/apt/sources.list`` file and updated, as
560
described previously), by running:
561

    
562
.. code-block:: console
563

    
564
   # apt-get install snf-astakos-app snf-pithos-backend
565

    
566
.. _conf-astakos:
567

    
568
Configuration of Astakos
569
========================
570

    
571
Gunicorn setup
572
--------------
573

    
574
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
575
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:
576

    
577
.. code-block:: console
578

    
579
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
580

    
581

    
582
.. warning:: Do NOT start the server yet, because it won't find the
583
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
584
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
585
    ``--worker-class=sync``. We will start the server after successful
586
    installation of Astakos. If the server is running::
587

    
588
       # /etc/init.d/gunicorn stop
589

    
590
Conf Files
591
----------
592

    
593
After Astakos is successfully installed, you will find the directory
594
``/etc/synnefo`` and some configuration files inside it. The files contain
595
commented configuration options, which are the default options. While installing
596
new snf-* components, new configuration files will appear inside the directory.
597
In this guide (and for all services), we will edit only the minimum necessary
598
configuration options, to reflect our setup. Everything else will remain as is.
599

    
600
After getting familiar with Synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available to empower the
administrator with extensively customizable setups.
603

    
604
For the snf-webproject component (installed as an Astakos dependency), we
605
need the following:
606

    
607
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
608
uncomment and edit the ``DATABASES`` block to reflect our database:
609

    
610
.. code-block:: console
611

    
612
    DATABASES = {
613
     'default': {
614
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
615
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
616
         # ATTENTION: This *must* be the absolute path if using sqlite3.
617
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
618
         'NAME': 'snf_apps',
619
         'USER': 'synnefo',                      # Not used with sqlite3.
620
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
621
         # Set to empty string for localhost. Not used with sqlite3.
622
         'HOST': '203.0.113.1',
623
         # Set to empty string for default. Not used with sqlite3.
624
         'PORT': '5432',
625
     }
626
    }
627

    
628
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
629
``SECRET_KEY``. This is a Django specific setting which is used to provide a
630
seed in secret-key hashing algorithms. Set this to a random string of your
631
choice and keep it private:
632

    
633
.. code-block:: console
634

    
635
    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
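
One convenient way to generate a sufficiently random string for ``SECRET_KEY``
(just a suggestion; any long random string will do) is:

.. code-block:: console

   # python -c "import base64, os; print(base64.b64encode(os.urandom(48)))"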
636

    
637
For Astakos specific configuration, edit the following options in
638
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :
639

    
640
.. code-block:: console
641

    
642
    ASTAKOS_COOKIE_DOMAIN = '.example.com'
643

    
644
    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'
645

    
646
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our deployment,
shared by all services. ``ASTAKOS_BASE_URL`` is the Astakos top-level URL.
Appending an extra path (``/astakos`` here) is recommended in order to
distinguish components, if more than one is installed on the same machine.
650

    
651
.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
652
    If you would like to enable it, you have to edit the following options:
653

    
654
    .. code-block:: console
655

    
656
        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
657
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
658
        ASTAKOS_RECAPTCHA_USE_SSL = True
659
        ASTAKOS_RECAPTCHA_ENABLED = True
660

    
661
    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
662
    go to https://www.google.com/recaptcha/admin/create and create your own pair.
663

    
664
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
665

    
666
.. code-block:: console
667

    
668
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
669

    
670
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
671

    
672
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
673

    
674
Those settings have to do with the black cloudbar endpoints and will be
675
described in more detail later on in this guide. For now, just edit the domain
676
to point at node1 which is where we have installed Astakos.
677

    
678
If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.
680

    
681
.. _email-configuration:
682

    
683
Email delivery configuration
684
----------------------------
685

    
686
Many of the ``Astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process, the
service sends an email to the registered email address containing a
verification URL. After the user verifies the email address, Astakos once again
needs to notify administrators with a notice that a new account has just been
verified.
691

    
692
More specifically, Astakos sends emails in the following cases:
693

    
694
- An email containing a verification link after each signup process.
695
- An email to the people listed in ``ADMINS`` setting after each email
696
  verification if ``ASTAKOS_MODERATION`` setting is ``True``. The email
697
  notifies administrators that an additional action is required in order to
698
  activate the user.
699
- A welcome email to the user email and an admin notification to ``ADMINS``
700
  right after each account activation.
701
- Feedback messages submitted from the Astakos contact view and the Astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project membership action (join request, membership
  accepted/declined, etc.) to project members or project owners.
707

    
708
Astakos uses the internal Django email delivery mechanism to send email
notifications. A simple configuration, using an external SMTP server to
deliver messages, is shown below. Alter the following example to match your
SMTP server's characteristics. Note that an SMTP server is needed for a proper
installation.
713

    
714
Edit ``/etc/synnefo/00-snf-common-admins.conf``:
715

    
716
.. code-block:: python
717

    
718
    EMAIL_HOST = "mysmtp.server.example.com"
719
    EMAIL_HOST_USER = "<smtpuser>"
720
    EMAIL_HOST_PASSWORD = "<smtppassword>"
721

    
722
    # this gets appended in all email subjects
723
    EMAIL_SUBJECT_PREFIX = "[example.com] "
724

    
725
    # Address to use for outgoing emails
726
    DEFAULT_FROM_EMAIL = "server@example.com"
727

    
728
    # Email where users can contact for support. This is used in html/email
729
    # templates.
730
    CONTACT_EMAIL = "server@example.com"
731

    
732
    # The email address that error messages come from
733
    SERVER_EMAIL = "server-errors@example.com"
734

    
735
Notice that since email settings might be required by applications other than
736
Astakos, they are defined in a different configuration file than the one
737
previously used to set Astakos specific settings.
738

    
739
Refer to
740
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
741
for additional information on available email settings.
742

    
743
As mentioned in the previous section, the recipient list differs depending on
the operation that triggers an email notification. Specifically, for emails
whose recipients include contacts from your service team (administrators,
managers, helpdesk, etc.), Synnefo provides the following settings located in
``00-snf-common-admins.conf``:
748

    
749
.. code-block:: python
750

    
751
    ADMINS = (('Admin name', 'admin@example.com'),
              ('Admin2 name', 'admin2@example.com'))
753
    MANAGERS = (('Manager name', 'manager@example.com'),)
754
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.com'),)
755

    
756
Alternatively, it may be convenient to send e-mails to a file, instead of an
actual SMTP server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:
757

    
758
.. code-block:: python
759

    
760
    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
761
    EMAIL_FILE_PATH = '/tmp/app-messages'
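
If you later want to check that mail delivery is wired up correctly, a quick,
optional test is to send a message from a Django shell (``snf-manage shell``
assumes the standard Django ``shell`` command is available, as in a default
Synnefo installation):

.. code-block:: console

   # snf-manage shell
   >>> from django.core.mail import send_mail
   >>> send_mail("Test subject", "Test body", "server@example.com", ["user@example.com"])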
762

    
763

    
764
Enable Pooling
765
--------------
766

    
767
This section can be skipped, but we strongly recommend you apply the following,
since it results in a significant performance boost.
769

    
770
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
771
around Psycopg2. This allows independent Django requests to reuse pooled DB
772
connections, with significant performance gains.
773

    
774
To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:
776

    
777
.. code-block:: console
778

    
779
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
780
    monkey_patch_psycopg2()
781

    
782
Since we are running with greenlets, we should modify psycopg2 behavior, so it
783
works properly in a greenlet context:
784

    
785
.. code-block:: console
786

    
787
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
788
    make_psycopg_green()
789

    
790
Use the Psycopg2 driver as usual. For Django, this means using
791
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
792
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
793
driver, through ``DATABASES.OPTIONS`` in Django.
794

    
795
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
796
file that looks like this:
797

    
798
.. code-block:: console
799

    
800
    # Monkey-patch psycopg2
801
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
802
    monkey_patch_psycopg2()
803

    
804
    # If running with greenlets
805
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
806
    make_psycopg_green()
807

    
808
    DATABASES = {
809
     'default': {
810
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
811
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
812
         'OPTIONS': {'synnefo_poolsize': 8},
813

    
814
         # ATTENTION: This *must* be the absolute path if using sqlite3.
815
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
816
         'NAME': 'snf_apps',
817
         'USER': 'synnefo',                      # Not used with sqlite3.
818
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
819
         # Set to empty string for localhost. Not used with sqlite3.
820
         'HOST': '203.0.113.1',
821
         # Set to empty string for default. Not used with sqlite3.
822
         'PORT': '5432',
823
     }
824
    }
825

    
826
Database Initialization
827
-----------------------
828

    
829
After configuration is done, we initialize the database by running:
830

    
831
.. code-block:: console
832

    
833
    # snf-manage syncdb
834

    
835
In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for Astakos:
838

    
839
.. code-block:: console
840

    
841
    # snf-manage migrate im
842
    # snf-manage migrate quotaholder_app
843

    
844
Then, we load the pre-defined user groups:
845

    
846
.. code-block:: console
847

    
848
    # snf-manage loaddata groups
849

    
850
.. _services-reg:
851

    
852
Services Registration
853
---------------------
854

    
855
When the database is ready, we need to register the services. The following
856
command will ask you to register the standard Synnefo components (Astakos,
857
Cyclades and Pithos) along with the services they provide. Note that you
858
have to register at least Astakos in order to have a usable authentication
859
system. For each component, you will be asked to provide two URLs: its base
860
URL and its UI URL.
861

    
862
The former is the location where the component resides; it should equal
863
the ``<component_name>_BASE_URL`` as specified in the respective component
864
settings. For example, the base URL for Astakos would be
865
``https://node1.example.com/astakos``.
866

    
867
The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/``, where ``base_url`` is the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --url new_ui_url``.)
872

    
873
The command will also register automatically the resource definitions
874
offered by the services.
875

    
876
.. code-block:: console
877

    
878
    # snf-component-register
879

    
880
.. note::
881

    
882
   This command is equivalent to running the following series of commands;
883
   it registers the three components in Astakos and then in each host it
884
   exports the respective service definitions, copies the exported json file
885
   to the Astakos host, where it finally imports it:
886

    
887
    .. code-block:: console
888

    
889
       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
890
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
891
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
892
       astakos-host$ snf-manage service-export-astakos > astakos.json
893
       astakos-host$ snf-manage service-import --json astakos.json
894
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
895
       # copy the file to astakos-host
896
       astakos-host$ snf-manage service-import --json cyclades.json
897
       pithos-host$ snf-manage service-export-pithos > pithos.json
898
       # copy the file to astakos-host
899
       astakos-host$ snf-manage service-import --json pithos.json
900

    
901
Notice that in this installation Astakos and Cyclades are on node1 and Pithos is on node2.
902

    
903
Setting Default Base Quota for Resources
904
----------------------------------------
905

    
906
We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or
memory size limits, consider adding an appropriate size suffix to the
numeric value, e.g. 10240 MB, 10 GB, etc.
910

    
911
.. code-block:: console
912

    
913
    # snf-manage resource-modify --default-quota-interactive
914

    
915
.. _pithos_view_registration:
916

    
917
Register pithos view as an OAuth 2.0 client
918
-------------------------------------------
919

    
920
Starting from Synnefo version 0.15, the pithos view, in order to get access to
the data of a protected pithos resource, has to be granted authorization for
the specific resource by astakos.

During the authorization grant procedure, it has to authenticate itself with
astakos, since the latter has to avoid serving requests by unknown/unauthorized
clients.
927

    
928
Each OAuth 2.0 client is identified by a client identifier (client_id).
Moreover, confidential clients are authenticated via a password
(client_secret).
In addition, each client has to declare at least one redirect URI, so that
astakos will be able to validate the redirect URI provided during the
authorization code request.
If a client is trusted (like the pithos view), astakos grants access on behalf
of the resource owner; otherwise, the resource owner has to be asked.
936

    
937
To register the pithos view as an OAuth 2.0 client in astakos, we have to run
938
the following command::
939

    
940
    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url https://node2.example.com/pithos/ui/view
941

    
942
Servers Initialization
943
----------------------
944

    
945
Finally, we initialize the servers on node1:
946

    
947
.. code-block:: console
948

    
949
    root@node1:~ # /etc/init.d/gunicorn restart
950
    root@node1:~ # /etc/init.d/apache2 restart
951

    
952
We have now finished the Astakos setup. Let's test it now.
953

    
954

    
955
Testing of Astakos
956
==================
957

    
958
Open your favorite browser and go to:
959

    
960
``http://node1.example.com/astakos``
961

    
962
If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can
see the "welcome" door of Astakos, then you have successfully set up Astakos.
964

    
965
Let's create our first user. On the homepage click the "CREATE ACCOUNT" button
and fill in your data on the sign-up form. Then click "SUBMIT". You should now
see a green box at the top, which informs you that you made a successful request
and that the request has been sent to the administrators. So far so good; let's
assume that you created the user with username ``user@example.com``.
970

    
971
Now we need to activate that user. Return to a command prompt at node1 and run:
972

    
973
.. code-block:: console
974

    
975
    root@node1:~ # snf-manage user-list
976

    
977
This command should show you a list with only one user, the one we just created.
This user should have an id with a value of ``1`` and the flags "active" and
"verified" set to False. Now run:
980

    
981
.. code-block:: console
982

    
983
    root@node1:~ # snf-manage user-modify 1 --verify --accept
984

    
985
This verifies the user email and activates the user.
986
When running in production, the activation is done automatically with different
987
types of moderation, that Astakos supports. You can see the moderation methods
988
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
989
documentation. In production, you can also manually activate a user, by sending
990
him/her an activation email. See how to do this at the :ref:`User
991
activation <user_activation>` section.
992

    
993
Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/``
with your browser again. Try to sign in using your new credentials. If the
Astakos menu appears and you can see your profile, then you have successfully
set up Astakos.
997

    
998
Let's continue to install Pithos now.
999

    
1000

    
1001
Installation of Pithos on node2
1002
===============================
1003

    
1004
To install Pithos, grab the packages from our repository (make sure  you made
1005
the additions needed in your ``/etc/apt/sources.list`` file, as described
1006
previously), by running:
1007

    
1008
.. code-block:: console
1009

    
1010
   # apt-get install snf-pithos-app snf-pithos-backend
1011

    
1012
Now, install the pithos web interface:
1013

    
1014
.. code-block:: console
1015

    
1016
   # apt-get install snf-pithos-webclient
1017

    
1018
This package provides the standalone Pithos web client. The web client is the
1019
web UI for Pithos and will be accessible by clicking "Pithos" on the Astakos
1020
interface's cloudbar, at the top of the Astakos homepage.
1021

    
1022

    
1023
.. _conf-pithos:
1024

    
1025
Configuration of Pithos
1026
=======================
1027

    
1028
Gunicorn setup
1029
--------------
1030

    
1031
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
1032
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
1033
(as happened for node1):
1034

    
1035
.. code-block:: console
1036

    
1037
    # cp /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
1038

    
1039

    
1040
.. warning:: Do NOT start the server yet, because it won't find the
1041
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
1042
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
1043
    ``--worker-class=sync``. We will start the server after successful
1044
    installation of Astakos. If the server is running::
1045

    
1046
       # /etc/init.d/gunicorn stop
1047

    
1048
Conf Files
1049
----------
1050

    
1051
After Pithos is successfully installed, you will find the directory
1052
``/etc/synnefo`` and some configuration files inside it, as you did in node1
1053
after installation of Astakos. Here, you will not have to change anything that
1054
has to do with snf-common or snf-webproject. Everything is set at node1. You
1055
only need to change settings that have to do with Pithos. Specifically:
1056

    
1057
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:
1059

    
1060
.. code-block:: console
1061

    
1062
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'
1063

    
1064
   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
1065
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
1066
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'
1067

    
1068
   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'
1069

    
1070

    
1071
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
1076

    
1077
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.
1081

    
1082
The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
1083
The Astakos service is used for user management (authentication, quotas, etc.)
1084

    
1085
The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.
1086

    
1087
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with Astakos.
1088
It can be retrieved by running on the Astakos node (node1 in our case):
1089

    
1090
.. code-block:: console
1091

    
1092
   # snf-manage component-list
1093

    
1094
The token has been generated automatically during the :ref:`Pithos service
1095
registration <services-reg>`.
1096

    
1097
The ``PITHOS_UPDATE_MD5`` option by default disables the computation of
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
1101

    
1102
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
1103
Pithos web UI with the Astakos web UI (through the top cloudbar):
1104

    
1105
.. code-block:: console
1106

    
1107
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
1108
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
1109
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
1110

    
1111
The ``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
1112
cloudbar.
1113

    
1114
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
1115
Pithos web client to get from Astakos all the information needed to fill its
1116
own cloudbar. So we put our Astakos deployment urls there.
1117

    
1118
The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view
in order to authenticate itself with astakos during the authorization grant
procedure, and it should contain the credentials issued for the pithos view
in `the pithos view registration step`__.
1122

    
1123
__ pithos_view_registration_
1124

    
1125
Pooling and Greenlets
1126
---------------------
1127

    
1128
Pithos is pooling-ready without the need of further configuration, because it
1129
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
1130
backend objects for access to the Pithos DB.
1131

    
1132
However, as in Astakos, since we are running with Greenlets, it is also
1133
recommended to modify psycopg2 behavior so it works properly in a greenlet
1134
context. This means adding the following lines at the top of your
1135
``/etc/synnefo/10-snf-webproject-database.conf`` file:
1136

    
1137
.. code-block:: console
1138

    
1139
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
1140
    make_psycopg_green()
1141

    
1142
Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
1143
mentioned above, depending on your setup) argument on your
1144
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
1145
like this:
1146

    
1147
.. code-block:: console
1148

    
1149
    CONFIG = {
1150
     'mode': 'django',
1151
     'environment': {
1152
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
1153
     },
1154
     'working_dir': '/etc/synnefo',
1155
     'user': 'www-data',
1156
     'group': 'www-data',
1157
     'args': (
1158
       '--bind=127.0.0.1:8080',
1159
       '--workers=4',
1160
       '--worker-class=gevent',
1161
       '--log-level=debug',
1162
       '--timeout=43200'
1163
     ),
1164
    }
1165

    
1166
Stamp Database Revision
1167
-----------------------
1168

    
1169
Pithos uses the alembic_ database migrations tool.
1170

    
1171
.. _alembic: http://alembic.readthedocs.org
1172

    
1173
After a successful installation, we should stamp it at the most recent
1174
revision, so that future migrations know where to start upgrading in
1175
the migration history.
1176

    
1177
.. code-block:: console
1178

    
1179
    root@node2:~ # pithos-migrate stamp head
1180

    
1181
Mount the NFS directory
1182
-----------------------
1183

    
1184
First install the package nfs-common by running:
1185

    
1186
.. code-block:: console
1187

    
1188
   root@node2:~ # apt-get install nfs-common
1189

    
1190
Now create the directory ``/srv/pithos/`` and mount the remote directory on it:
1191

    
1192
.. code-block:: console
1193

    
1194
   root@node2:~ # mkdir /srv/pithos/
1195
   root@node2:~ # mount -t nfs 203.0.113.1:/srv/pithos/ /srv/pithos/
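
To have the mount survive reboots, you can also add an entry to node2's
``/etc/fstab`` (a typical NFS entry; adjust the options to your needs):

.. code-block:: console

   203.0.113.1:/srv/pithos/ /srv/pithos/ nfs defaults 0 0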
1196

    
1197
Servers Initialization
1198
----------------------
1199

    
1200
After configuration is done, we initialize the servers on node2:
1201

    
1202
.. code-block:: console
1203

    
1204
    root@node2:~ # /etc/init.d/gunicorn restart
1205
    root@node2:~ # /etc/init.d/apache2 restart
1206

    
1207
You have now finished the Pithos setup. Let's test it now.
1208

    
1209
Testing of Pithos
1210
=================
1211

    
1212
Open your browser and go to the Astakos homepage:
1213

    
1214
``http://node1.example.com/astakos``
1215

    
1216
Login, and you will see your profile page. Now, click the "Pithos" link on the
1217
top black cloudbar. If everything was set up correctly, this will redirect you
1218
to:
1219

    
1220
``https://node2.example.com/ui``
1221

    
1222
and you will see the blue interface of the Pithos application.  Click the
1223
orange "Upload" button and upload your first file. If the file gets uploaded
1224
successfully, then this is your first sign of a successful Pithos installation.
1225
Go ahead and experiment with the interface to make sure everything works
1226
correctly.
1227

    
1228
You can also use the Pithos clients to sync data from your Windows PC or Mac.
1229

    
1230
If you don't stumble on any problems, then you have successfully installed
1231
Pithos, which you can use as a standalone File Storage Service.
1232

    
1233
If you would like to do more, such as:
1234

    
1235
    * Spawning VMs
1236
    * Spawning VMs from Images stored on Pithos
1237
    * Uploading your custom Images to Pithos
1238
    * Spawning VMs from those custom Images
1239
    * Registering existing Pithos files as Images
1240
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks
1243

    
1244
please continue with the rest of the guide.
1245

    
1246

    
1247
Kamaki
1248
======
1249

    
1250
`Kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_ is an
OpenStack API client library and command line interface with custom extensions
specific to Synnefo.
1253

    
1254
Kamaki Installation and Configuration
1255
-------------------------------------
1256

    
1257
To install kamaki run:
1258

    
1259
.. code-block:: console
1260

    
1261
   # apt-get install kamaki
1262

    
1263
Now, visit

``https://node1.example.com/astakos/ui/``
1266

    
1267
log in and click on ``API access``. Scroll all the way to the bottom of the
1268
page, click on the orange ``Download your .kamakirc`` button and save the file
1269
as ``.kamakirc`` in your home directory.
1270

    
1271
That's all; kamaki is now configured and you can start using it. For a list of
commands, see the `official documentation <http://www.synnefo.org/docs/kamaki/latest/commands.html>`_.
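
For example, to verify that kamaki can talk to Astakos, you could run something
like the following (exact command names vary slightly between kamaki versions,
so treat this as a hedged example rather than the canonical invocation):

.. code-block:: console

   $ kamaki user authenticate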
1273

    
1274
Cyclades Prerequisites
1275
======================
1276

    
1277
Before proceeding with the Cyclades installation, make sure you have
1278
successfully set up Astakos and Pithos first, because Cyclades depends on
1279
them. If you don't have a working Astakos and Pithos installation yet, please
1280
return to the :ref:`top <quick-install-admin-guide>` of this guide.
1281

    
1282
Besides Astakos and Pithos, you will also need a number of additional working
1283
prerequisites, before you start the Cyclades installation.
1284

    
1285
Ganeti
1286
------
1287

    
1288
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
1289
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
1290
Please refer to the `ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all
1291
the gory details. A successful Ganeti installation concludes with a working
1292
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
1293
<GANETI_NODES>`.
1294

    
1295
The above Ganeti cluster can run on different physical machines than node1 and
1296
node2 and can scale independently, according to your needs.
1297

    
1298
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
1299
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
1300
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
1301

    
1302
We highly recommend that you read the official Ganeti documentation, if you are
1303
not familiar with Ganeti.
1304

    
1305
Ganeti Prerequisites
1306
--------------------
1307
You will need the ``lvm2`` and ``vlan`` packages, so run:
1308

    
1309
.. code-block:: console
1310

    
1311
   # apt-get install lvm2 vlan
1312

    
1313
Ganeti requires FQDNs. To properly configure your nodes, please
see `this <http://docs.ganeti.org/ganeti/2.6/html/install.html#hostname-issues>`_.
1315

    
1316
Ganeti requires an extra available IP and its FQDN e.g., ``203.0.113.100`` and
1317
``ganeti.node1.example.com``. Add this IP to your DNS server configuration, as
1318
explained above.
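
For instance, with the dnsmasq setup described earlier, appending a line like
the following to node1's ``/etc/hosts`` (and restarting dnsmasq) is enough; the
IP and name are the example values assumed above:

.. code-block:: console

   203.0.113.100   ganeti.node1.example.com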
1319

    
1320
Also, Ganeti will need a volume group with the same name (e.g., ``ganeti``)
across all nodes, of at least 20GiB. To create the volume group,
see `this <http://www.tldp.org/HOWTO/LVM-HOWTO/createvgs.html>`_.
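
A minimal sketch, assuming a spare partition ``/dev/sdb1`` on each node (adapt
the device name to your own disks):

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1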
1323

    
1324
Moreover, node1 and node2 must have the same DSA and RSA keys and the same
``authorized_keys`` under ``/root/.ssh/``, for password-less root SSH between
each other. To generate said keys, see
`this <https://wiki.debian.org/SSH#Using_shared_keys>`_.
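
A hedged sketch of one way to do this (generate a key pair on node1 and copy
it, together with ``authorized_keys``, to node2):

.. code-block:: console

   root@node1:~ # ssh-keygen -t rsa            # accept the defaults, empty passphrase
   root@node1:~ # cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
   root@node1:~ # scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub \
                      /root/.ssh/authorized_keys root@node2:/root/.ssh/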
1327

    
1328
In the following sections, we assume that the public interface of all nodes is
1329
``eth0`` and there are two extra interfaces ``eth1`` and ``eth2``, which can
1330
also be vlans on your primary interface e.g., ``eth0.1`` and ``eth0.2``  in
1331
case you don't have multiple physical interfaces. For information on how to
1332
create vlans, please see
1333
`this <https://wiki.debian.org/NetworkConfiguration#Howto_use_vlan_.28dot1q.2C_802.1q.2C_trunk.29_.28Etch.2C_Lenny.29>`_.
1334

    
1335
Finally, set up two bridges on the host machines (e.g., br1/br2 on eth1/eth2
respectively), as described `here <https://wiki.debian.org/BridgeNetworkConnections>`_.
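
As an illustration only (assuming the ``bridge-utils`` package is installed and
that ``eth1``/``eth2`` are the interfaces chosen above), the relevant part of
``/etc/network/interfaces`` could look like:

.. code-block:: console

   auto br1
   iface br1 inet manual
       bridge_ports eth1
       bridge_stp off
       bridge_fd 0

   auto br2
   iface br2 inet manual
       bridge_ports eth2
       bridge_stp off
       bridge_fd 0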
1337

    
1338
Ganeti Installation and Initialization
1339
--------------------------------------
1340

    
1341
We assume that Ganeti will use the KVM hypervisor. To install KVM, run on all
1342
Ganeti nodes:
1343

    
1344
.. code-block:: console
1345

    
1346
   # apt-get install qemu-kvm
1347

    
1348
It's time to install Ganeti. To be able to use hotplug (which will be part of
1349
the official Ganeti 2.10), we recommend using our Ganeti package version:
1350

    
1351
``2.8.2+snapshot1+b64v1+kvmopts1+extfix1+hotplug5+lockfix3+ippoolfix+rapifix+netxen-1~wheezy``
1352

    
1353
Let's briefly explain each patch:
1354

    
1355
    * hotplug: hotplug devices (NICs and Disks) (ganeti 2.10).
1356
    * b64v1: Save bitarray of network IP pools in config file, encoded in
1357
      base64, instead of 0/1.
1358
    * ippoolfix: Ability to give an externally reserved IP to an instance (e.g.
1359
      gateway IP)  (ganeti 2.10).
1360
    * kvmopts: Export disk geometry to kvm command and add migration
1361
      capabilities.
1362
    * extfix: Includes:
1363

    
1364
      * exports logical id in hooks.
1365
      * adds better arbitrary params support (modification, deletion).
1366
      * cache, heads, cyls arbitrary params reach kvm command.
1367

    
1368
    * rapifix: Extend RAPI to support 'depends' and 'shutdown_timeout' body
1369
      arguments. (ganeti 2.9).
1370
    * netxen: Network configuration for xen instances, exactly like in kvm
1371
      instances. (ganeti 2.9).
1372
    * lockfix2: Fixes for 2 locking issues:
1373

    
1374
      * Issue 622: Fix for opportunistic locking that caused an assertion
1375
        error (Patch waiting in ganeti-devel list).
1376
      * Issue 621: Fix for network locking issue that resulted in: [Lock
1377
        'XXXXXX' not found in set 'instance' (it may have been removed)].
1378

    
1379
    * snapshot: Add trivial 'snapshot' functionality that is unused by Synnefo
1380
      or Ganeti.
1381

    
1382
To install Ganeti run:
1383

    
1384
.. code-block:: console
1385

    
1386
   # apt-get install snf-ganeti ganeti-htools ganeti-haskell
1387

    
1388
Ganeti will make use of DRBD. To enable this and make the configuration
permanent, you have to do the following:
1390

    
1391
.. code-block:: console
1392

    
1393
   # modprobe drbd minor_count=255 usermode_helper=/bin/true
1394
   # echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules
1395

    
1396
Then run on node1:
1397

    
1398
.. code-block:: console
1399

    
1400
    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
1401
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br1 \
1402
                    --default-iallocator hail \
1403
                    --hypervisor-parameters kvm:kernel_path=,vnc_bind_address=0.0.0.0 \
1404
                    --master-netdev eth0 ganeti.node1.example.com
1405

    
1406
    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
1407
                    --vm-capable=yes node2.example.com
1408
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
1409
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
1410

    
1411
``br1`` will be the default interface for any newly created VMs.
1412

    
1413
You can verify that the Ganeti cluster is successfully set up by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1415

    
1416
.. code-block:: console
1417

    
1418
   # gnt-cluster verify
1419

    
1420
.. _cyclades-install-snfimage:
1421

    
1422
snf-image
1423
---------
1424

    
1425
Installation
1426
~~~~~~~~~~~~
1427
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
1428
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ OS
1429
Definition installed on *all* VM-capable Ganeti nodes. This means we need
1430
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>` on
1431
node1 and node2. You can do this by running on *both* nodes:
1432

    
1433
.. code-block:: console
1434

    
1435
   # apt-get install snf-image snf-pithos-backend python-psycopg2
1436

    
1437
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
1438
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
1439
able to access the Pithos database. This is why, we also install them on *all*
1440
VM-capable Ganeti nodes.
1441

    
1442
.. warning::
1443
		snf-image uses ``curl`` for handling URLs. This means that it will
1444
		not  work out of the box if you try to use URLs served by servers which do
1445
		not have a valid certificate. In case you haven't followed the guide's
1446
		directions about the certificates, in order to circumvent this you should edit the file
1447
		``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"`` on every node.
1448

    
1449
Configuration
1450
~~~~~~~~~~~~~
1451
snf-image supports native access to Images stored on Pithos. This means that
1452
it can talk directly to the Pithos backend, without the need of providing a
1453
public URL. More details, are described in the next section. For now, the only
1454
thing we need to do, is configure snf-image to access our Pithos backend.
1455

    
1456
To do this, we need to set the corresponding variable in
1457
``/etc/default/snf-image``, to reflect our Pithos setup:
1458

    
1459
.. code-block:: console
1460

    
1461
    PITHOS_DATA="/srv/pithos/data"
1462

    
1463
If you have installed your Ganeti cluster on different nodes than node1 and
1464
node2 make sure that ``/srv/pithos/data`` is visible by all of them.
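
For example, on node2 you can quickly confirm that the shared directory is
mounted and readable (a simple check, assuming the NFS setup described earlier
in this guide):

.. code-block:: console

   node2 # df -h /srv/pithos
   node2 # ls /srv/pithos/data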

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``, however this guide targets Images stored only on
Pithos.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested to learn more about snf-image's internals (and even use
it alongside Ganeti without Synnefo), please see
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
----------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images.
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
supports Images stored in ``extdump``, ``ntfsdump`` or ``diskdump`` format. We
recommend the use of the ``diskdump`` format. For more information about
snf-image Image formats see `here
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.

:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
also supports three (3) different locations for the above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g.: http://... or ftp://...)
    * On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
image is of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.

To upload the file using kamaki, run:

.. code-block:: console

   # kamaki file upload debian_base-6.0-x86_64.diskdump pithos

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes, a Debian Squeeze Image on
Pithos, and kamaki installed and configured. Make sure you also have the
`metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_
for this image.

To spawn a VM from a Pithos file, we need to know:

    1) The hashmap of the file
    2) The size of the file

If you uploaded the file with kamaki as described above, run:

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump

else, replace ``pithos`` and ``debian_base-6.0-x86_64.diskdump`` with the
container and filename you used, when uploading the file.

The hashmap is the ``x-object-hash`` field, while the size of the file is the
``content-length`` field, both returned by the ``kamaki file info`` command.
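
For example, you can isolate just these two fields with ``grep`` (a small
convenience, assuming the container and filename used above):

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump | grep -iE 'x-object-hash|content-length'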

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithosmap://<HashMap>/<Size>",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithosmap://<HashMap>/<size>``:

               * ``HashMap``: the map of the file
               * ``size``: the size of the file, same size as reported in
                 ``ls -l filename``

 * ``img_properties``: taken from the metadata file. We use only the two
   mandatory properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
   <http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require UUID instead of a username). Another issue you may encounter is that in
relatively slow setups, you may need to raise the default ``HELPER_*_TIMEOUTS``
in ``/etc/default/snf-image``. Also, make sure you gave the correct ``img_id``
and ``img_properties``. If ``gnt-instance add`` succeeds but you cannot connect,
again find out what went wrong. Do *NOT* proceed to the next steps unless you
are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. Synnefo supports a lot of different
networking configurations in the backend (spanning from very simple to more
advanced), which are not in the scope of this guide.

In this section, we'll describe the simplest scenario, which will enable the
VMs to have access to the public Internet and also access to arbitrary private
networks.

At the end of this section the networking setup on the two nodes will look like
this:

.. image:: images/install-guide-networks.png
   :width: 70%
   :target: _images/install-guide-networks.png

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network is a set of custom scripts that perform all the necessary actions,
so that VMs have a working networking configuration.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

nfdhcpd is an NFQUEUE based daemon that answers DHCP requests and runs locally
on every Ganeti node. Its leases file gets automatically updated by
snf-network, using information provided by Ganeti.

.. code-block:: console

   # apt-get install python-nfqueue=0.4+physindev-1~wheezy
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s (the one running dnsmasq for instance, or you can use
Google's DNS server ``8.8.8.8``). Restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

In order for nfdhcpd to receive the VMs' requests, we have to mangle all DHCP
traffic coming from the corresponding interfaces. To accomplish that run:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and append it to your ``/etc/rc.local``.
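
To verify that the rule is in place, you can list the mangle table (a simple
check; the packet counters should increase once VMs start sending DHCP
requests):

.. code-block:: console

   # iptables -t mangle -L PREROUTING -n -v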

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

In the following section, we'll guide you through a very basic network setup.
This assumes the following:

    * Node1 has access to the public network via eth0.
    * Node1 will become a NAT server for the VMs.
    * All nodes have ``br1/br2`` dedicated for the VMs' public/private traffic.
    * VMs' public network is ``10.0.0.0/24`` with gateway ``10.0.0.1``.

Setting up the NAT server on node1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To setup the NAT server on node1, run:

.. code-block:: console

   # ip addr add 10.0.0.1/24 dev br1
   # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   # echo 1 > /proc/sys/net/ipv4/ip_forward

and append it to your ``/etc/rc.local``.
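
You can confirm that forwarding and masquerading are active with (values and
packet counters will differ per setup):

.. code-block:: console

   # sysctl net.ipv4.ip_forward
   # iptables -t nat -L POSTROUTING -n -v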

Testing the Public Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~

First add the network in Ganeti:

.. code-block:: console

   # gnt-network add --network=10.0.0.0/24 --gateway=10.0.0.1 --tags=nfdhcpd test-net-public

Then, provide connectivity mode and link to the network:

.. code-block:: console

   # gnt-network connect test-net-public bridged br1

Now, it is time to test that the backend infrastructure is correctly setup for
the Public Network. We will add a new VM, almost the same way we did it on the
previous testing section. However, now we'll also add one NIC, configured to be
managed from our previously defined network.

Fetch the Debian Old Base image locally (on all nodes), by running:

.. code-block:: console

   # wget http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump -O /var/lib/snf-image/debian_base-6.0-x86_64.diskdump

Also on all nodes, bring all ``br*`` interfaces up:

.. code-block:: console

   # ifconfig br1 up
   # ifconfig br2 up

Finally, run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

The following things should happen:

    * Ganeti creates a tap interface.
    * snf-network bridges the tap interface to ``br1`` and updates nfdhcpd state.
    * nfdhcpd serves 10.0.0.2 IP to the interface of ``testvm2``.

Now try to ping the outside world e.g., ``www.synnefo.org`` from inside the VM
(connect to the VM using VNC as before).
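
From the host side, you can also verify that the VM's tap interface was
attached to ``br1`` and that nfdhcpd saw its DHCP request (assuming
``bridge-utils`` is installed, which Ganeti normally pulls in):

.. code-block:: console

   # brctl show br1
   # tail /var/log/nfdhcpd/nfdhcpd.log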

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

In this section, we'll describe a basic network configuration that will provide
isolated private networks to the end-users. All private network traffic will
pass through ``br1`` and isolation will be guaranteed with a specific set of
``ebtables`` rules.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We'll create two instances and connect them to the same Private Network. This
means that the instances will have a second NIC connected to ``br1``.

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac bridged br1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac -n node2 \
                      testvm4

Above, we create two instances with their first NIC connected to the internet
and their second NIC connected to a MAC-filtered private Network. Now, connect
to the instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a "public" IP.

 b) ``eth1`` will have MAC prefix ``aa:00:55``.

 c) On testvm3, ping 192.168.1.2.
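
On the Ganeti nodes themselves, you can also list the ``ebtables`` rules that
enforce the MAC-prefix isolation (the exact chains and rules depend on the
snf-network version in use):

.. code-block:: console

   # ebtables -t filter -L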

If everything works as expected, then you have finished the Network Setup at the
backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.
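
As a quick sanity check, you can verify from each Ganeti node that the RabbitMQ
endpoint is reachable (assuming netcat is installed; ``ok`` should be printed):

.. code-block:: console

   # nc -z node1.example.com 5672 && echo ok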

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write
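
The ``{HA1}`` value is simply the MD5 digest of
``user:Ganeti Remote API:password``. If you prefer, you can compute it with
Python instead of openssl; the result should match the ``openssl md5`` output
above (a convenience sketch, not required by the guide):

.. code-block:: console

   # python -c "import hashlib; print(hashlib.md5('cyclades:Ganeti Remote API:example_rapi_passw0rd').hexdigest())"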

More about Ganeti's RAPI users `here.
<http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_

You have now finished with all needed Prerequisites for Cyclades. Let's move on
to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by 'snf-image' (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a cache
backend and exporting it via VMAPI. The cache entries are invalidated after the
first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.
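
Before moving on, you can confirm that memcached is up and listening on its
default port (the exact output depends on your setup):

.. code-block:: console

   # netstat -ntlp | grep 11211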

Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to end up with a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_AUTH_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have identical ``ASTAKOS_AUTH_URL`` settings
(see also, :ref:`previously <conf-pithos>`).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with Astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same
values as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
setup earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have setup the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been setup correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com"
                               --user=cyclades
                               --pass=example_rapi_passw0rd
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we setup the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want to
learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to setup a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge.

.. code-block:: console

   $ snf-manage network-create --subnet=10.0.0.0/24 \
                               --gateway=10.0.0.1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br1 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was setup correctly, also run:

.. code-block:: console

   # snf-manage reconcile-networks

You can use ``snf-manage reconcile-networks --fix-all`` to fix any
inconsistencies that may have arisen.

You can see all available networks by running:

.. code-block:: console

   # snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   # snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   # gnt-network list
   # gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the admin has to define these two
pools in Synnefo:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

Also, change the Synnefo setting in :file:`/etc/synnefo/20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'br2'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges. By
default it is not enabled during installation of Cyclades, so let's enable it in
its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
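
For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log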

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER.

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

VM stats configuration
----------------------

Please refer to the documentation in the :ref:`admin guide <admin-guide-stats>`
for deploying and configuring snf-stats-app and collectd.

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.

Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all, we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Login and then click 'Cyclades' on the
top cloud bar. This should redirect you to:

 `https://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should currently be
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user has uploaded
a file in container ``pithos`` called ``debian_base-6.0-x86_64``. Moreover,
they should have the appropriate `metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_.

Once the Image file has been successfully uploaded on Pithos, we register
it to Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" pithos:debian_base-6.0-x86_64 \
     --metafile debian_base-6.0-x86_64.diskdump.meta --public

This command registers a Pithos file as an Image in Cyclades. This Image will
be public (``--public``), so all users will be able to spawn VMs from it.
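
To confirm the registration from the command line, you can also list the Images
known to Cyclades; the newly registered "Debian Base" should appear in the
output:

.. code-block:: console

   $ kamaki image list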

Spawn a VM from the Cyclades Web UI
------------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/cyclades/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was setup correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components.