.. _quick-install-admin-guide:
2

    
3
Administrator's Installation Guide
4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5

    
6
This is the Administrator's installation guide.
7

    
8
It describes how to install the whole Synnefo stack on two (2) physical nodes,
9
with minimum configuration. It installs synnefo from Debian packages, and
10
assumes the nodes run Debian Wheezy. After successful installation, you will
11
have the following services running:
12

    
13
    * Identity Management (Astakos)
14
    * Object Storage Service (Pithos)
15
    * Compute Service (Cyclades)
16
    * Image Service (part of Cyclades)
17
    * Network Service (part of Cyclades)
18

    
19
and a single unified Web UI to manage them all.
20

    
21
If you just want to install the Object Storage Service (Pithos), follow the
22
guide and just stop after the "Testing of Pithos" section.
23

    
24

    
25
Installation of Synnefo / Introduction
26
======================================
27

    
28
We will install the services in the order of the list above. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).
34

    
35
For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and that their public IPs are
"203.0.113.1" and "203.0.113.2" respectively. It is important that the two
machines are under the same domain name. If you choose to do a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See node1 below for more information on how to do so.
42

    
43
General Prerequisites
44
=====================
45

    
46
These are the general synnefo prerequisites, that you need on node1 and node2
47
and are related to all the services (Astakos, Pithos, Cyclades).
48

    
49
To be able to download all synnefo components you need to add the following
50
lines in your ``/etc/apt/sources.list`` file:
51

    
52
| ``deb http://apt.dev.grnet.gr wheezy/``
53
| ``deb-src http://apt.dev.grnet.gr wheezy/``
54

    
55
and import the repo's GPG key:
56

    
57
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``
58

    
59
Update your list of packages and continue with the installation:
60

    
61
.. code-block:: console
62

    
63
   # apt-get update
64

    
65
You also need a shared directory visible by both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the no_root_squash flag). Node2 has
this directory mounted under ``/srv/pithos``, too.
72

    
73
Before starting the synnefo installation, you will need basic third party
74
software to be installed and configured on the physical nodes. We will describe
75
each node's general prerequisites separately. Any additional configuration,
76
specific to a synnefo service for each node, will be described at the service's
77
section.
78

    
79
Finally, it is required for Cyclades and Ganeti nodes to have synchronized
80
system clocks (e.g. by running ntpd).
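
A quick way to verify that a node is actually syncing its clock (assuming the
stock Debian ``ntp`` package, which is installed in each node's prerequisites
below) is to list the NTP peers and check that at least one is reachable:

.. code-block:: console

   # ntpq -p
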
81

    
82
Node1
83
-----
84

    
85

    
86
General Synnefo dependencies
87
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
88

    
89
		* apache (http server)
90
		* public certificate
91
		* gunicorn (WSGI http server)
92
		* postgresql (database)
93
		* rabbitmq (message queue)
94
		* ntp (NTP daemon)
95
		* gevent
96
		* dnsmasq (DNS server)
97

    
98
You can install apache2, postgresql, ntp and rabbitmq by running:
99

    
100
.. code-block:: console
101

    
102
   # apt-get install apache2 postgresql ntp rabbitmq-server
103

    
104
To install gunicorn and gevent, run:
105

    
106
.. code-block:: console
107

    
108
   # apt-get install gunicorn python-gevent
109

    
110
On node1, we will create our databases, so you will also need the
111
python-psycopg2 package:
112

    
113
.. code-block:: console
114

    
115
   # apt-get install python-psycopg2
116

    
117
Database setup
118
~~~~~~~~~~~~~~
119

    
120
On node1, we create a database called ``snf_apps``, which will host the tables
of all Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
123

    
124
.. code-block:: console
125

    
126
    root@node1:~ # su - postgres
127
    postgres@node1:~ $ psql
128
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
129
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
130
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;
131

    
132
We also create the database ``snf_pithos`` needed by the Pithos backend and
133
grant the ``synnefo`` user all privileges on the database. This database could
134
be created on node2 instead, but we do it on node1 for simplicity. We will
135
create all needed databases on node1 and then node2 will connect to them.
136

    
137
.. code-block:: console
138

    
139
    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
140
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
141

    
142
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/9.1/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:
145

    
146
.. code-block:: console
147

    
148
    listen_addresses = '*'
149

    
150
Furthermore, edit ``/etc/postgresql/9.1/main/pg_hba.conf`` to allow node1 and
151
node2 to connect to the database. Add the following lines under ``#IPv4 local
152
connections:`` :
153

    
154
.. code-block:: console
155

    
156
    host		all	all	203.0.113.1/32	md5
157
    host		all	all	203.0.113.2/32	md5
158

    
159
Make sure to substitute "203.0.113.1" and "203.0.113.2" with node1's and node2's
160
actual IPs. Now, restart the server to apply the changes:
161

    
162
.. code-block:: console
163

    
164
   # /etc/init.d/postgresql restart
165

    
166

    
167
Certificate Creation
168
~~~~~~~~~~~~~~~~~~~~~
169

    
170
Node1 will host Cyclades. Cyclades should communicate with the other Synnefo 
171
Services and users over a secure channel. In order for the connection to be 
172
trusted, the keys provided to Apache below should be signed with a certificate.
173
This certificate should be added to all nodes. In case you don't have signed keys you can create a self-signed certificate
174
and sign your keys with this. To do so on node1 run:
175

    
176
.. code-block:: console
177

    
178
		# apt-get install openvpn
179
		# mkdir /etc/openvpn/easy-rsa
180
		# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
181
		# cd /etc/openvpn/easy-rsa/2.0
182
		# vim vars
183

    
184
In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.
185

    
186
.. code-block:: console
187

    
188
	# . ./vars
189
	# ./clean-all
190

    
191
Now you can create the certificate
192

    
193
.. code-block:: console
194

    
195
		# ./build-ca
196

    
197
The previous command will create a ``ca.crt`` file in the directory
``/etc/openvpn/easy-rsa/2.0/keys``. Copy this file under the
``/usr/local/share/ca-certificates/`` directory and run:
199

    
200
.. code-block:: console
201

    
202
		# update-ca-certificates
203

    
204
to update the records. You will have to do the following on node2 as well.
205

    
206
Now you can create the keys and sign them with the certificate
207

    
208
.. code-block:: console
209

    
210
		# ./build-key-server node1.example.com
211

    
212
This will create ``01.pem`` and ``node1.example.com.key`` files in the
``/etc/openvpn/easy-rsa/2.0/keys`` directory. Copy these into ``/etc/ssl/certs/``
and ``/etc/ssl/private/`` respectively and use them in the apache2
configuration file below instead of the defaults.
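
For example, a minimal sketch of copying them into place (assuming the file
names produced above) would be:

.. code-block:: console

   # cp /etc/openvpn/easy-rsa/2.0/keys/01.pem /etc/ssl/certs/
   # cp /etc/openvpn/easy-rsa/2.0/keys/node1.example.com.key /etc/ssl/private/

Then point the ``SSLCertificateFile`` and ``SSLCertificateKeyFile`` directives
of the SSL virtual host below to these paths, instead of the snakeoil defaults.
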
216

    
217
Apache2 setup
218
~~~~~~~~~~~~~
219

    
220
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
221
following:
222

    
223
.. code-block:: console
224

    
225
    <VirtualHost *:80>
226
        ServerName node1.example.com
227

    
228
        RewriteEngine On
229
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
230
        RewriteRule ^(.*)$ - [F,L]
231
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
232
    </VirtualHost>
233

    
234

    
235
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
236
following:
237

    
238
.. code-block:: console
239

    
240
    <IfModule mod_ssl.c>
241
    <VirtualHost _default_:443>
242
        ServerName node1.example.com
243

    
244
        Alias /static "/usr/share/synnefo/static"
245

    
246
        #  SetEnv no-gzip
247
        #  SetEnv dont-vary
248

    
249
       AllowEncodedSlashes On
250

    
251
       RequestHeader set X-Forwarded-Protocol "https"
252

    
253
    <Proxy * >
254
        Order allow,deny
255
        Allow from all
256
    </Proxy>
257

    
258
        SetEnv                proxy-sendchunked
259
        SSLProxyEngine        off
260
        ProxyErrorOverride    off
261

    
262
        ProxyPass        /static !
263
        ProxyPass        / http://localhost:8080/ retry=0
264
        ProxyPassReverse / http://localhost:8080/
265

    
266
        RewriteEngine On
267
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
268
        RewriteRule ^(.*)$ - [F,L]
269

    
270
        SSLEngine on
271
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
272
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
273
    </VirtualHost>
274
    </IfModule>
275

    
276
Now enable sites and modules by running:
277

    
278
.. code-block:: console
279

    
280
   # a2enmod ssl
281
   # a2enmod rewrite
282
   # a2dissite default
283
   # a2ensite synnefo
284
   # a2ensite synnefo-ssl
285
   # a2enmod headers
286
   # a2enmod proxy_http
287

    
288
.. note:: This isn't really needed, but it's a good security practice to disable
289
    directory listing in apache::
290

    
291
        # a2dismod autoindex
292

    
293

    
294
.. warning:: Do NOT start/restart the server yet. If the server is running::
295

    
296
       # /etc/init.d/apache2 stop
297

    
298

    
299
.. _rabbitmq-setup:
300

    
301
Message Queue setup
302
~~~~~~~~~~~~~~~~~~~
303

    
304
The message queue will run on node1, so we need to create the appropriate
305
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
306
exchanges:
307

    
308
.. code-block:: console
309

    
310
   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
311
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
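
To confirm that the user and its permissions are in place, you can optionally
list them:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions
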
312

    
313
We do not need to initialize the exchanges. This will be done automatically,
314
during the Cyclades setup.
315

    
316
Pithos data directory setup
317
~~~~~~~~~~~~~~~~~~~~~~~~~~~
318

    
319
As mentioned in the General Prerequisites section, there should be a directory
called ``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:
322

    
323
.. code-block:: console
324

    
325
   # mkdir /srv/pithos
326
   # cd /srv/pithos
327
   # mkdir data
328
   # chown www-data:www-data data
329
   # chmod g+ws data
330

    
331
This directory must be shared via `NFS <https://en.wikipedia.org/wiki/Network_File_System>`_.
332
In order to do this, run:
333

    
334
.. code-block:: console
335

    
336
   # apt-get install rpcbind nfs-kernel-server
337

    
338
Now edit ``/etc/exports`` and add the following line:
339

    
340
.. code-block:: console
341
   
342
   /srv/pithos/ 203.0.113.2(rw,no_root_squash,sync,subtree_check)
343

    
344
Once done, run:
345

    
346
.. code-block:: console
347

    
348
   # /etc/init.d/nfs-kernel-server restart
349

    
350

    
351
DNS server setup
352
~~~~~~~~~~~~~~~~
353

    
354
If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:
356

    
357
.. code-block:: console
358

    
359
   # apt-get install dnsmasq
360

    
361
Then edit your ``/etc/hosts`` file as follows:
362

    
363
.. code-block:: console
364

    
365
		203.0.113.1     node1.example.com
366
		203.0.113.2     node2.example.com
367

    
368
dnsmasq will serve any IPs/domains found in ``/etc/resolv.conf``.
369

    
370
There is a `"bug" in libevent 2.0.5 <http://sourceforge.net/p/levent/bugs/193/>`_
371
, where if you have multiple nameservers in your ``/etc/resolv.conf``, libevent
372
will round-robin against them. To avoid this, you must use a single nameserver
373
for all your needs. Edit your ``/etc/resolv.conf`` to include your dns server: 
374

    
375
.. code-block:: console
376

    
377
   nameserver 203.0.113.1
378

    
379
Because of the aforementioned bug, you can't specify more than one DNS server
in your ``/etc/resolv.conf``. In order for dnsmasq to serve domains not in
``/etc/hosts``, edit ``/etc/dnsmasq.conf`` and change the line starting with
``#resolv-file=`` to:
383

    
384
.. code-block:: console
385

    
386
   resolv-file=/etc/external-dns
387

    
388
Now create the file ``/etc/external-dns`` and specify any extra DNS servers you
389
want dnsmasq to query for domains, e.g., 8.8.8.8:
390

    
391
.. code-block:: console
392

    
393
   nameserver 8.8.8.8
394

    
395
In the ``/etc/dnsmasq.conf`` file, you can also specify the ``listen-address``
and the ``interface`` you would like dnsmasq to listen on.
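
A minimal sketch of these two settings, assuming node1's IP and a hypothetical
``eth0`` interface, would be:

.. code-block:: console

   listen-address=203.0.113.1
   interface=eth0
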
397

    
398
Finally, restart dnsmasq:
399

    
400
.. code-block:: console
401

    
402
   # /etc/init.d/dnsmasq restart
403

    
404
You are now ready with all general prerequisites concerning node1. Let's go to
405
node2.
406

    
407
Node2
408
-----
409

    
410
General Synnefo dependencies
411
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
412

    
413
    * apache (http server)
414
    * gunicorn (WSGI http server)
415
    * postgresql (database)
416
    * ntp (NTP daemon)
417
    * gevent
418
    * certificates
419
    * dnsmasq (DNS server)
420

    
421
You can install the above by running:
422

    
423
.. code-block:: console
424

    
425
   # apt-get install apache2 postgresql ntp
426

    
427
To install gunicorn and gevent, run:
428

    
429
.. code-block:: console
430

    
431
   # apt-get install gunicorn python-gevent
432

    
433
Node2 will connect to the databases on node1, so you will also need the
434
python-psycopg2 package:
435

    
436
.. code-block:: console
437

    
438
   # apt-get install python-psycopg2
439

    
440
Database setup
441
~~~~~~~~~~~~~~
442

    
443
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are outside the scope of this guide.
448

    
449
Apache2 setup
450
~~~~~~~~~~~~~
451

    
452
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
453
following:
454

    
455
.. code-block:: console
456

    
457
    <VirtualHost *:80>
458
        ServerName node2.example.com
459

    
460
        RewriteEngine On
461
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
462
        RewriteRule ^(.*)$ - [F,L]
463
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
464
    </VirtualHost>
465

    
466
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
467
containing the following:
468

    
469
.. code-block:: console
470

    
471
    <IfModule mod_ssl.c>
472
    <VirtualHost _default_:443>
473
        ServerName node2.example.com
474

    
475
        Alias /static "/usr/share/synnefo/static"
476

    
477
        SetEnv no-gzip
478
        SetEnv dont-vary
479
        AllowEncodedSlashes On
480

    
481
        RequestHeader set X-Forwarded-Protocol "https"
482

    
483
        <Proxy * >
484
            Order allow,deny
485
            Allow from all
486
        </Proxy>
487

    
488
        SetEnv                proxy-sendchunked
489
        SSLProxyEngine        off
490
        ProxyErrorOverride    off
491

    
492
        ProxyPass        /static !
493
        ProxyPass        / http://localhost:8080/ retry=0
494
        ProxyPassReverse / http://localhost:8080/
495

    
496
        SSLEngine on
497
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
498
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
499
    </VirtualHost>
500
    </IfModule>
501

    
502
As in node1, enable sites and modules by running:
503

    
504
.. code-block:: console
505

    
506
   # a2enmod ssl
507
   # a2enmod rewrite
508
   # a2dissite default
509
   # a2ensite synnefo
510
   # a2ensite synnefo-ssl
511
   # a2enmod headers
512
   # a2enmod proxy_http
513

    
514
.. note:: This isn't really needed, but it's a good security practice to disable
515
    directory listing in apache::
516

    
517
        # a2dismod autoindex
518

    
519
.. warning:: Do NOT start/restart the server yet. If the server is running::
520

    
521
       # /etc/init.d/apache2 stop
522

    
523

    
524
Acquire certificate
525
~~~~~~~~~~~~~~~~~~~
526

    
527
Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/`` and run:
529

    
530
.. code-block:: console
531

    
532
   # update-ca-certificates
533

    
534
to update the records.
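
If the file is not yet present on node2, a minimal sketch of copying it over
from node1 (assuming the easy-rsa key directory used above and working root ssh
between the two nodes) is:

.. code-block:: console

   root@node2:~ # scp node1.example.com:/etc/openvpn/easy-rsa/2.0/keys/ca.crt \
                      /usr/local/share/ca-certificates/
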
535

    
536

    
537
DNS Setup
538
~~~~~~~~~
539

    
540
Add the following line to the ``/etc/resolv.conf`` file
541

    
542
.. code-block:: console
543

    
544
   nameserver 203.0.113.1
545

    
546
to inform the node about the new DNS server.
547

    
548
As mentioned before, this should be the only ``nameserver`` entry in 
549
``/etc/resolv.conf``.
550

    
551
We are now ready with all general prerequisites for node2. Now that we have
552
finished with all general prerequisites for both nodes, we can start installing
553
the services. First, let's install Astakos on node1.
554

    
555
Installation of Astakos on node1
556
================================
557

    
558
To install Astakos, grab the package from our repository (make sure you have
made the additions needed in your ``/etc/apt/sources.list`` file and updated,
as described previously), by running:
561

    
562
.. code-block:: console
563

    
564
   # apt-get install snf-astakos-app snf-pithos-backend
565

    
566
.. _conf-astakos:
567

    
568
Configuration of Astakos
569
========================
570

    
571
Gunicorn setup
572
--------------
573

    
574
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
575
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:
576

    
577
.. code-block:: console
578

    
579
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
580

    
581

    
582
.. warning:: Do NOT start the server yet, because it won't find the
583
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
584
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
585
    ``--worker-class=sync``. We will start the server after successful
586
    installation of Astakos. If the server is running::
587

    
588
       # /etc/init.d/gunicorn stop
589

    
590
Conf Files
591
----------
592

    
593
After Astakos is successfully installed, you will find the directory
594
``/etc/synnefo`` and some configuration files inside it. The files contain
595
commented configuration options, which are the default options. While installing
596
new snf-* components, new configuration files will appear inside the directory.
597
In this guide (and for all services), we will edit only the minimum necessary
598
configuration options, to reflect our setup. Everything else will remain as is.
599

    
600
After getting familiar with Synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available to empower the
administrator with extensively customizable setups.
603

    
604
For the snf-webproject component (installed as an Astakos dependency), we
605
need the following:
606

    
607
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
608
uncomment and edit the ``DATABASES`` block to reflect our database:
609

    
610
.. code-block:: console
611

    
612
    DATABASES = {
613
     'default': {
614
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
615
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
616
         # ATTENTION: This *must* be the absolute path if using sqlite3.
617
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
618
         'NAME': 'snf_apps',
619
         'USER': 'synnefo',                      # Not used with sqlite3.
620
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
621
         # Set to empty string for localhost. Not used with sqlite3.
622
         'HOST': '203.0.113.1',
623
         # Set to empty string for default. Not used with sqlite3.
624
         'PORT': '5432',
625
     }
626
    }
627

    
628
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
629
``SECRET_KEY``. This is a Django specific setting which is used to provide a
630
seed in secret-key hashing algorithms. Set this to a random string of your
631
choice and keep it private:
632

    
633
.. code-block:: console
634

    
635
    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
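
One way to generate such a random string (just a sketch; any source of
randomness will do) is:

.. code-block:: console

   # python -c "import random, string; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for _ in range(50))"
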
636

    
637
For Astakos specific configuration, edit the following options in
638
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :
639

    
640
.. code-block:: console
641

    
642
    ASTAKOS_COOKIE_DOMAIN = '.example.com'
643

    
644
    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'
645

    
646
The ``ASTAKOS_COOKIE_DOMAIN`` should be the top-level domain of our deployment
(shared by all services). ``ASTAKOS_BASE_URL`` is the Astakos top-level URL.
Appending an extra path (``/astakos`` here) is recommended in order to
distinguish components, if more than one is installed on the same machine.
650

    
651
.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
652
    If you would like to enable it, you have to edit the following options:
653

    
654
    .. code-block:: console
655

    
656
        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
657
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
658
        ASTAKOS_RECAPTCHA_USE_SSL = True
659
        ASTAKOS_RECAPTCHA_ENABLED = True
660

    
661
    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
662
    go to https://www.google.com/recaptcha/admin/create and create your own pair.
663

    
664
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
665

    
666
.. code-block:: console
667

    
668
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
669

    
670
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
671

    
672
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
673

    
674
Those settings have to do with the black cloudbar endpoints and will be
675
described in more detail later on in this guide. For now, just edit the domain
676
to point at node1 which is where we have installed Astakos.
677

    
678
If you are an advanced user and want to use the Shibboleth Authentication
679
method, read the relative :ref:`section <shibboleth-auth>`.
680

    
681
.. _email-configuration:
682

    
683
Email delivery configuration
684
----------------------------
685

    
686
Many of the ``Astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process, the
service sends an email to the registered email address containing a
verification URL. After the user verifies the email address, Astakos once again
needs to notify administrators that a new account has just been verified.
691

    
692
More specifically, Astakos sends emails in the following cases:
693

    
694
- An email containing a verification link after each signup process.
695
- An email to the people listed in ``ADMINS`` setting after each email
696
  verification if ``ASTAKOS_MODERATION`` setting is ``True``. The email
697
  notifies administrators that an additional action is required in order to
698
  activate the user.
699
- A welcome email to the user email and an admin notification to ``ADMINS``
700
  right after each account activation.
701
- Feedback messages submitted from the Astakos contact view and the Astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined, etc.) to project members or project owners.
707

    
708
Astakos uses Django's internal email delivery mechanism to send email
notifications. A simple configuration, using an external SMTP server to
deliver messages, is shown below. Alter the following example to match your
SMTP server's characteristics. Notice that an SMTP server is needed for a
proper installation.
713

    
714
Edit ``/etc/synnefo/00-snf-common-admins.conf``:
715

    
716
.. code-block:: python
717

    
718
    EMAIL_HOST = "mysmtp.server.example.com"
719
    EMAIL_HOST_USER = "<smtpuser>"
720
    EMAIL_HOST_PASSWORD = "<smtppassword>"
721

    
722
    # this gets appended in all email subjects
723
    EMAIL_SUBJECT_PREFIX = "[example.com] "
724

    
725
    # Address to use for outgoing emails
726
    DEFAULT_FROM_EMAIL = "server@example.com"
727

    
728
    # Email where users can contact for support. This is used in html/email
729
    # templates.
730
    CONTACT_EMAIL = "server@example.com"
731

    
732
    # The email address that error messages come from
733
    SERVER_EMAIL = "server-errors@example.com"
734

    
735
Notice that since email settings might be required by applications other than
736
Astakos, they are defined in a different configuration file than the one
737
previously used to set Astakos specific settings.
738

    
739
Refer to
740
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
741
for additional information on available email settings.
742

    
743
As mentioned in the previous section, the recipients list differs based on the
operation that triggers the email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk etc.), synnefo provides the following settings, located in
``00-snf-common-admins.conf``:
748

    
749
.. code-block:: python
750

    
751
    ADMINS = (('Admin name', 'admin@example.com'),
              ('Admin2 name', 'admin2@example.com'))
    MANAGERS = (('Manager name', 'manager@example.com'),)
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.com'),)
755

    
756
Alternatively, it may be convenient to send e-mails to a file, instead of an
actual SMTP server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:
757

    
758
.. code-block:: python
759

    
760
    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
761
    EMAIL_FILE_PATH = '/tmp/app-messages' 
762
  
763

    
764

    
765
Enable Pooling
766
--------------
767

    
768
This section can be bypassed, but we strongly recommend you apply the following,
769
since they result in a significant performance boost.
770

    
771
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
772
around Psycopg2. This allows independent Django requests to reuse pooled DB
773
connections, with significant performance gains.
774

    
775
To use, first monkey-patch psycopg2. For Django, run this before the
776
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:
777

    
778
.. code-block:: console
779

    
780
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
781
    monkey_patch_psycopg2()
782

    
783
Since we are running with greenlets, we should modify psycopg2 behavior, so it
784
works properly in a greenlet context:
785

    
786
.. code-block:: console
787

    
788
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
789
    make_psycopg_green()
790

    
791
Use the Psycopg2 driver as usual. For Django, this means using
792
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
793
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
794
driver, through ``DATABASES.OPTIONS`` in Django.
795

    
796
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
797
file that looks like this:
798

    
799
.. code-block:: console
800

    
801
    # Monkey-patch psycopg2
802
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
803
    monkey_patch_psycopg2()
804

    
805
    # If running with greenlets
806
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
807
    make_psycopg_green()
808

    
809
    DATABASES = {
810
     'default': {
811
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
812
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
813
         'OPTIONS': {'synnefo_poolsize': 8},
814

    
815
         # ATTENTION: This *must* be the absolute path if using sqlite3.
816
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
817
         'NAME': 'snf_apps',
818
         'USER': 'synnefo',                      # Not used with sqlite3.
819
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
820
         # Set to empty string for localhost. Not used with sqlite3.
821
         'HOST': '203.0.113.1',
822
         # Set to empty string for default. Not used with sqlite3.
823
         'PORT': '5432',
824
     }
825
    }
826

    
827
Database Initialization
828
-----------------------
829

    
830
After configuration is done, we initialize the database by running:
831

    
832
.. code-block:: console
833

    
834
    # snf-manage syncdb
835

    
836
In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for Astakos:
839

    
840
.. code-block:: console
841

    
842
    # snf-manage migrate im
843
    # snf-manage migrate quotaholder_app
844

    
845
Then, we load the pre-defined user groups
846

    
847
.. code-block:: console
848

    
849
    # snf-manage loaddata groups
850

    
851
.. _services-reg:
852

    
853
Services Registration
854
---------------------
855

    
856
When the database is ready, we need to register the services. The following
857
command will ask you to register the standard Synnefo components (Astakos,
858
Cyclades and Pithos) along with the services they provide. Note that you
859
have to register at least Astakos in order to have a usable authentication
860
system. For each component, you will be asked to provide two URLs: its base
861
URL and its UI URL.
862

    
863
The former is the location where the component resides; it should equal
864
the ``<component_name>_BASE_URL`` as specified in the respective component
865
settings. For example, the base URL for Astakos would be
866
``https://node1.example.com/astakos``.
867

    
868
The latter is the URL that appears in the Cloudbar and leads to the
869
component UI. If you want to follow the default setup, set
870
the UI URL to ``<base_url>/ui/`` where ``base_url`` the component's base
871
URL as explained before. (You can later change the UI URL with
872
``snf-manage component-modify <component_name> --url new_ui_url``.)
873

    
874
The command will also register automatically the resource definitions
875
offered by the services.
876

    
877
.. code-block:: console
878

    
879
    # snf-component-register
880

    
881
.. note::
882

    
883
   This command is equivalent to running the following series of commands;
884
   it registers the three components in Astakos and then in each host it
885
   exports the respective service definitions, copies the exported json file
886
   to the Astakos host, where it finally imports it:
887

    
888
    .. code-block:: console
889

    
890
       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
891
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
892
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
893
       astakos-host$ snf-manage service-export-astakos > astakos.json
894
       astakos-host$ snf-manage service-import --json astakos.json
895
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
896
       # copy the file to astakos-host
897
       astakos-host$ snf-manage service-import --json cyclades.json
898
       pithos-host$ snf-manage service-export-pithos > pithos.json
899
       # copy the file to astakos-host
900
       astakos-host$ snf-manage service-import --json pithos.json
901

    
902
Notice that in this installation astakos and cyclades are in node1 and pithos is in node2.
903

    
904
Setting Default Base Quota for Resources
905
----------------------------------------
906

    
907
We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or memory
size limits, consider adding an appropriate size suffix to the numeric value,
e.g. 10240 MB, 10 GB etc.
911

    
912
.. code-block:: console
913

    
914
    # snf-manage resource-modify --default-quota-interactive
915

    
916
.. _pithos_view_registration:
917

    
918
Register pithos view as an OAuth 2.0 client
919
-------------------------------------------
920

    
921
Starting from synnefo version 0.15, the pithos view, in order to get access to
the data of a protected pithos resource, has to be granted authorization for
the specific resource by astakos.
924

    
925
During the authorization grant procedure, it has to authenticate itself with
astakos, since the latter has to avoid serving requests by unknown/unauthorized
clients.
928

    
929
Each oauth 2.0 client is identified by a client identifier (client_id).
Moreover, confidential clients are authenticated via a password
(client_secret). Each client also has to declare at least one redirect URI, so
that astakos will be able to validate the redirect URI provided during the
authorization code request.
935
If a client is trusted (like a pithos view) astakos grants access on behalf
936
of the resource owner, otherwise the resource owner has to be asked.
937

    
938
To register the pithos view as an OAuth 2.0 client in astakos, we have to run
939
the following command::
940

    
941
    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url https://node2.example.com/pithos/ui/view
942

    
943
Servers Initialization
944
----------------------
945

    
946
Finally, we initialize the servers on node1:
947

    
948
.. code-block:: console
949

    
950
    root@node1:~ # /etc/init.d/gunicorn restart
951
    root@node1:~ # /etc/init.d/apache2 restart
952

    
953
We have now finished the Astakos setup. Let's test it now.
954

    
955

    
956
Testing of Astakos
957
==================
958

    
959
Open your favorite browser and go to:
960

    
961
``http://node1.example.com/astakos``
962

    
963
If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can
see the "welcome" door of Astakos, then you have successfully set up Astakos.
965

    
966
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
967
and fill all your data at the sign up form. Then click "SUBMIT". You should now
968
see a green box on the top, which informs you that you made a successful request
969
and the request has been sent to the administrators. So far so good, let's
970
assume that you created the user with username ``user@example.com``.
971

    
972
Now we need to activate that user. Return to a command prompt at node1 and run:
973

    
974
.. code-block:: console
975

    
976
    root@node1:~ # snf-manage user-list
977

    
978
This command should show you a list with only one user: the one we just
created. This user should have an id with a value of ``1`` and the flags
"active" and "verified" set to False. Now run:
981

    
982
.. code-block:: console
983

    
984
    root@node1:~ # snf-manage user-modify 1 --verify --accept
985

    
986
This verifies the user email and activates the user.
987
When running in production, the activation is done automatically with different
988
types of moderation, that Astakos supports. You can see the moderation methods
989
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
990
documentation. In production, you can also manually activate a user, by sending
991
him/her an activation email. See how to do this at the :ref:`User
992
activation <user_activation>` section.
993

    
994
Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/``
with your browser again. Try to sign in using your new credentials. If the
Astakos menu appears and you can see your profile, then you have successfully
set up Astakos.
998

    
999
Let's continue to install Pithos now.
1000

    
1001

    
1002
Installation of Pithos on node2
1003
===============================
1004

    
1005
To install Pithos, grab the packages from our repository (make sure  you made
1006
the additions needed in your ``/etc/apt/sources.list`` file, as described
1007
previously), by running:
1008

    
1009
.. code-block:: console
1010

    
1011
   # apt-get install snf-pithos-app snf-pithos-backend
1012

    
1013
Now, install the pithos web interface:
1014

    
1015
.. code-block:: console
1016

    
1017
   # apt-get install snf-pithos-webclient
1018

    
1019
This package provides the standalone Pithos web client. The web client is the
1020
web UI for Pithos and will be accessible by clicking "Pithos" on the Astakos
1021
interface's cloudbar, at the top of the Astakos homepage.
1022

    
1023

    
1024
.. _conf-pithos:
1025

    
1026
Configuration of Pithos
1027
=======================
1028

    
1029
Gunicorn setup
1030
--------------
1031

    
1032
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
1033
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
1034
(as happened for node1):
1035

    
1036
.. code-block:: console
1037

    
1038
    # cp /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
1039

    
1040

    
1041
.. warning:: Do NOT start the server yet, because it won't find the
1042
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
1043
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
1044
    ``--worker-class=sync``. We will start the server after successful
1045
    installation of Astakos. If the server is running::
1046

    
1047
       # /etc/init.d/gunicorn stop
1048

    
1049
Conf Files
1050
----------
1051

    
1052
After Pithos is successfully installed, you will find the directory
1053
``/etc/synnefo`` and some configuration files inside it, as you did in node1
1054
after installation of Astakos. Here, you will not have to change anything that
1055
has to do with snf-common or snf-webproject. Everything is set at node1. You
1056
only need to change settings that have to do with Pithos. Specifically:
1057

    
1058
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:
1060

    
1061
.. code-block:: console
1062

    
1063
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'
1064

    
1065
   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
1066
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
1067
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'
1068

    
1069
   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'
1070

    
1071

    
1072
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to find
the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up in node1's "Database
setup" section.
1077

    
1078
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.
1082

    
1083
The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
1084
The Astakos service is used for user management (authentication, quotas, etc.)
1085

    
1086
The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.
1087

    
1088
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with Astakos.
1089
It can be retrieved by running on the Astakos node (node1 in our case):
1090

    
1091
.. code-block:: console
1092

    
1093
   # snf-manage component-list
1094

    
1095
The token has been generated automatically during the :ref:`Pithos service
1096
registration <services-reg>`.
1097

    
1098
The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
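
In that case, add the following line to
``/etc/synnefo/20-snf-pithos-app-settings.conf``:

.. code-block:: console

   PITHOS_UPDATE_MD5 = True
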
1102

    
1103
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
1104
Pithos web UI with the Astakos web UI (through the top cloudbar):
1105

    
1106
.. code-block:: console
1107

    
1108
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
1109
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
1110
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
1111

    
1112
The ``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
1113
cloudbar.
1114

    
1115
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
1116
Pithos web client to get from Astakos all the information needed to fill its
1117
own cloudbar. So we put our Astakos deployment urls there.
1118

    
1119
The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view
in order to authenticate itself with astakos during the authorization grant
procedure, and it should contain the credentials issued for the pithos view
in `the pithos view registration step`__.
1123

    
1124
__ pithos_view_registration_
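
As a sketch, assuming the setting takes a ``(client_id, client_secret)`` pair
and that ``pithos-view`` / ``<secret>`` are the values used in the registration
step above:

.. code-block:: console

   PITHOS_OAUTH2_CLIENT_CREDENTIALS = ("pithos-view", "<secret>")
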
1125

    
1126
Pooling and Greenlets
1127
---------------------
1128

    
1129
Pithos is pooling-ready without the need of further configuration, because it
1130
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
1131
backend objects for access to the Pithos DB.
1132

    
1133
However, as in Astakos, since we are running with Greenlets, it is also
1134
recommended to modify psycopg2 behavior so it works properly in a greenlet
1135
context. This means adding the following lines at the top of your
1136
``/etc/synnefo/10-snf-webproject-database.conf`` file:
1137

    
1138
.. code-block:: console
1139

    
1140
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
1141
    make_psycopg_green()
1142

    
1143
Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
1144
mentioned above, depending on your setup) argument on your
1145
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
1146
like this:
1147

    
1148
.. code-block:: console
1149

    
1150
    CONFIG = {
1151
     'mode': 'django',
1152
     'environment': {
1153
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
1154
     },
1155
     'working_dir': '/etc/synnefo',
1156
     'user': 'www-data',
1157
     'group': 'www-data',
1158
     'args': (
1159
       '--bind=127.0.0.1:8080',
1160
       '--workers=4',
1161
       '--worker-class=gevent',
1162
       '--log-level=debug',
1163
       '--timeout=43200'
1164
     ),
1165
    }
1166

    
1167
Stamp Database Revision
1168
-----------------------
1169

    
1170
Pithos uses the alembic_ database migrations tool.
1171

    
1172
.. _alembic: http://alembic.readthedocs.org
1173

    
1174
After a successful installation, we should stamp it at the most recent
1175
revision, so that future migrations know where to start upgrading in
1176
the migration history.
1177

    
1178
.. code-block:: console
1179

    
1180
    root@node2:~ # pithos-migrate stamp head
1181

    
1182
Mount the NFS directory
1183
-----------------------
1184

    
1185
First install the package nfs-common by running:
1186

    
1187
.. code-block:: console
1188

    
1189
   root@node2:~ # apt-get install nfs-common
1190

    
1191
Now create the directory ``/srv/pithos/`` and mount the remote directory on it:
1192

    
1193
.. code-block:: console
1194

    
1195
   root@node2:~ # mkdir /srv/pithos/
1196
   root@node2:~ # mount -t nfs 203.0.113.1:/srv/pithos/ /srv/pithos/
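
To make the mount persistent across reboots, you may also add an entry like the
following to ``/etc/fstab`` on node2 (a sketch, assuming the default NFS mount
options are acceptable):

.. code-block:: console

   203.0.113.1:/srv/pithos/ /srv/pithos/ nfs defaults 0 0
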
1197

    
1198
Servers Initialization
1199
----------------------
1200

    
1201
After configuration is done, we initialize the servers on node2:
1202

    
1203
.. code-block:: console
1204

    
1205
    root@node2:~ # /etc/init.d/gunicorn restart
1206
    root@node2:~ # /etc/init.d/apache2 restart
1207

    
1208
You have now finished the Pithos setup. Let's test it now.
1209

    
1210
Testing of Pithos
1211
=================
1212

    
1213
Open your browser and go to the Astakos homepage:
1214

    
1215
``http://node1.example.com/astakos``
1216

    
1217
Login, and you will see your profile page. Now, click the "Pithos" link on the
1218
top black cloudbar. If everything was setup correctly, this will redirect you
1219
to:
1220

    
1221
``https://node2.example.com/ui``
1222

    
1223
and you will see the blue interface of the Pithos application.  Click the
1224
orange "Upload" button and upload your first file. If the file gets uploaded
1225
successfully, then this is your first sign of a successful Pithos installation.
1226
Go ahead and experiment with the interface to make sure everything works
1227
correctly.
1228

    
1229
You can also use the Pithos clients to sync data from your Windows PC or MAC.
1230

    
1231
If you don't stumble on any problems, then you have successfully installed
1232
Pithos, which you can use as a standalone File Storage Service.
1233

    
1234
If you would like to do more, such as:
1235

    
1236
    * Spawning VMs
1237
    * Spawning VMs from Images stored on Pithos
1238
    * Uploading your custom Images to Pithos
1239
    * Spawning VMs from those custom Images
1240
    * Registering existing Pithos files as Images
1241
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks
1244

    
1245
please continue with the rest of the guide.
1246

    
1247

    
1248
Kamaki
1249
======
1250

    
1251
`Kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_ is an
OpenStack API client library and command-line interface with custom extensions
specific to Synnefo.
1254

    
1255
Kamaki Installation and Configuration
1256
-------------------------------------
1257

    
1258
To install kamaki run:
1259

    
1260
.. code-block:: console
1261

    
1262
   # apt-get install kamaki
1263

    
1264
Now, visit:

``https://node1.example.com/astakos/ui/``
1267

    
1268
log in and click on ``API access``. Scroll all the way to the bottom of the 
1269
page, click on the orange ``Download your .kamakirc`` button and save the file
1270
as ``.kamakirc`` in your home directory.
1271

    
1272
That's all, kamaki is now configured and you can start using it. For a list of
1273
commands, see the `official documentation <http://www.synnefo.org/docs/kamaki/latest/commands.html>`_.
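
For a quick smoke test (a sketch; the exact command names may vary between
kamaki versions), you can ask Astakos for your account details and list your
Pithos containers:

.. code-block:: console

   $ kamaki user info
   $ kamaki container list
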
1274

    
1275
Cyclades Prerequisites
1276
======================
1277

    
1278
Before proceeding with the Cyclades installation, make sure you have
1279
successfully set up Astakos and Pithos first, because Cyclades depends on
1280
them. If you don't have a working Astakos and Pithos installation yet, please
1281
return to the :ref:`top <quick-install-admin-guide>` of this guide.
1282

    
1283
Besides Astakos and Pithos, you will also need a number of additional working
1284
prerequisites, before you start the Cyclades installation.
1285

    
1286
Ganeti
1287
------
1288

    
1289
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
1290
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
1291
Please refer to the `ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all 
1292
the gory details. A successful Ganeti installation concludes with a working
1293
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
1294
<GANETI_NODES>`.
1295

    
1296
The above Ganeti cluster can run on different physical machines than node1 and
1297
node2 and can scale independently, according to your needs.
1298

    
1299
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
1300
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
1301
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
1302

    
1303
We highly recommend that you read the official Ganeti documentation, if you are
1304
not familiar with Ganeti.
1305

    
1306
Ganeti Prerequisites
1307
--------------------
1308
You will need the ``lvm2`` and ``vlan`` packages, so run:
1309

    
1310
.. code-block:: console
1311

    
1312
   # apt-get install lvm2 vlan
1313

    
1314
Ganeti requires fully qualified domain names (FQDNs) on its nodes. To properly
configure your nodes, please see
`this <http://docs.ganeti.org/ganeti/2.6/html/install.html#hostname-issues>`_.
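
A quick check on each node (it should print the node's fully qualified name,
e.g. ``node1.example.com``) is:

.. code-block:: console

   # hostname --fqdn
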
1316

    
1317
Ganeti requires an extra available IP and its FQDN e.g., ``203.0.113.100`` and 
1318
``ganeti.node1.example.com``. Add this IP to your DNS server configuration, as 
1319
explained above.
1320

    
1321
Also, Ganeti will need a volume group with the same name (e.g., ``ganeti``)
across all nodes, of at least 20GiB. To create the volume group,
see `this <http://www.tldp.org/HOWTO/LVM-HOWTO/createvgs.html>`_.
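
As a sketch, assuming a spare partition ``/dev/sdb1`` is available for LVM on
each node (adjust the device name to your hardware):

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1
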
1324

    
1325
Moreover, node1 and node2 must have the same DSA and RSA keys and
``authorized_keys`` under ``/root/.ssh/`` for password-less root ssh between
each other. To generate said keys, see
`this <https://wiki.debian.org/SSH#Using_shared_keys>`_.
1328

    
1329
In the following sections, we assume that the public interface of all nodes is
``eth0`` and that there are two extra interfaces, ``eth1`` and ``eth2``, which
can also be VLANs on your primary interface (e.g., ``eth0.1`` and ``eth0.2``)
in case you don't have multiple physical interfaces. For information on how to
create VLANs, please see
`this <https://wiki.debian.org/NetworkConfiguration#Howto_use_vlan_.28dot1q.2C_802.1q.2C_trunk.29_.28Etch.2C_Lenny.29>`_.
1335

    
1336
Finally, set up two bridges on the host machines (e.g., ``br1``/``br2`` on
``eth1``/``eth2`` respectively), as described
`here <https://wiki.debian.org/BridgeNetworkConnections>`_.
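
A minimal ``/etc/network/interfaces`` sketch for one such bridge (assuming a
physical ``eth1`` and the ``bridge-utils`` package; repeat analogously for
``br2``/``eth2``) could look like:

.. code-block:: console

   auto br1
   iface br1 inet manual
       bridge_ports eth1
       bridge_stp off
       bridge_fd 0
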
1338

    
1339
Ganeti Installation and Initialization
1340
--------------------------------------
1341

    
1342
We assume that Ganeti will use the KVM hypervisor. To install KVM, run on all 
1343
Ganeti nodes:
1344

    
1345
.. code-block:: console
1346

    
1347
   # apt-get install qemu-kvm
1348

    
1349
It's time to install Ganeti. To be able to use hotplug (which will be part of 
1350
the official Ganeti 2.10), we recommend using our Ganeti package version:
1351

    
1352
`2.8.2+snapshot1+b64v1+hotplug5+ippoolfix+rapifix+netxen+lockfix2-1~wheezy`
1353

    
1354
Let's briefly explain each patch:
1355

    
1356
    * hotplug: hotplug devices (NICs and Disks) (ganeti 2.10)
1357
    * b64v1: Save bitarray of network IP pools in config file, encoded in base64, instead of 0/1.
1358
    * ippoolfix: Ability to give an externally reserved IP to an instance (e.g. gateway IP).  (ganeti 2.10)
1359
    * rapifix: Extend RAPI to support 'depends' and 'shutdown_timeout' body arguments. (ganeti 2.9)
1360
    * netxen: Network configuration for xen instances, exactly like in kvm instances. (ganeti 2.9)
1361
    * lockfix2: Fixes for 2 locking issues:
1362

    
1363
      - Issue 622: Fix for opportunistic locking that caused an assertion error (Patch waiting in ganeti-devel list)
1364
      - Issue 621: Fix for network locking issue that resulted in: [Lock 'XXXXXX' not found in set 'instance' (it may have been removed)]
1365

    
1366
    * snapshot: Add trivial 'snapshot' functionality that is unused by Synnefo or Ganeti.
1367

    
1368
To install Ganeti run:
1369

    
1370
.. code-block:: console
1371

    
1372
   # apt-get install snf-ganeti ganeti-htools ganeti-haskell
1373

    
1374
Ganeti will make use of DRBD. To enable this and make the configuration
permanent, you have to do the following:
1376

    
1377
.. code-block:: console
1378

    
1379
   # modprobe drbd minor_count=255 usermode_helper=/bin/true
1380
   # echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules
1381

    
1382
Then run on node1:
1383

    
1384
.. code-block:: console
1385

    
1386
    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
1387
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br1 \
1388
                    --default-iallocator hail \
1389
                    --hypervisor-parameters kvm:kernel_path=,vnc_bind_address=0.0.0.0 \
1390
                    --master-netdev eth0 ganeti.node1.example.com
1391
    
1392
    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
1393
                    --vm-capable=yes node2.example.com
1394
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
1395
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
1396

    
1397
``br1`` will be the default interface for any newly created VMs.
1398

    
1399
You can verify that the ganeti cluster is successfully set up, by running on
the :ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1401

    
1402
.. code-block:: console
1403

    
1404
   # gnt-cluster verify
1405

    
1406
.. _cyclades-install-snfimage:
1407

    
1408
snf-image
1409
---------
1410

    
1411
Installation
1412
~~~~~~~~~~~~
1413
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
1414
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ OS
1415
Definition installed on *all* VM-capable Ganeti nodes. This means we need
1416
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ on
1417
node1 and node2. You can do this by running on *both* nodes:
1418

    
1419
.. code-block:: console
1420

    
1421
   # apt-get install snf-image snf-pithos-backend python-psycopg2
1422

    
1423
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>` to be able
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.
1427

    
1428
.. warning::
1429
		snf-image uses ``curl`` for handling URLs. This means that it will
1430
		not  work out of the box if you try to use URLs served by servers which do
1431
		not have a valid certificate. In case you haven't followed the guide's
1432
		directions about the certificates, in order to circumvent this you should edit the file
1433
		``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"`` on every node.
1434

    
1435
Configuration
1436
~~~~~~~~~~~~~
1437
snf-image supports native access to Images stored on Pithos. This means that
1438
it can talk directly to the Pithos backend, without the need of providing a
1439
public URL. More details, are described in the next section. For now, the only
1440
thing we need to do, is configure snf-image to access our Pithos backend.
1441

    
1442
To do this, we need to set the corresponding variable in
1443
``/etc/default/snf-image``, to reflect our Pithos setup:
1444

    
1445
.. code-block:: console
1446

    
1447
    PITHOS_DATA="/srv/pithos/data"
1448

    
1449
If you have installed your Ganeti cluster on different nodes than node1 and
1450
node2 make sure that ``/srv/pithos/data`` is visible by all of them.
1451

    
1452
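A quick way to confirm this on every VM-capable node is to check that the shared directory is mounted and readable there (an optional sanity check):

.. code-block:: console

   # df -h /srv/pithos
   # ls /srv/pithos/data
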
If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``, however this guide targets Images stored only on
Pithos.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images.
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
supports Images stored in ``extdump``, ``ntfsdump`` or ``diskdump`` format. We
recommend the use of the ``diskdump`` format. For more information about
snf-image Image formats see `here
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.

:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
also supports three (3) different locations for the above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
image is of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.

To upload the file using kamaki, run:

.. code-block:: console

   # kamaki file upload debian_base-6.0-x86_64.diskdump pithos

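If you want to double-check that the upload succeeded, you can list the contents of the ``pithos`` container. The exact invocation may differ between kamaki versions, but it would look roughly like:

.. code-block:: console

   # kamaki file list pithos
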
Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
-----------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes, a Debian Squeeze Image on
Pithos and kamaki installed and configured. Make sure you also have the
`metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_
for this image.

To spawn a VM from a Pithos file, we need to know:

    1) The hashmap of the file
    2) The size of the file

If you uploaded the file with kamaki as described above, run:

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump

Otherwise, replace ``pithos`` and ``debian_base-6.0-x86_64.diskdump`` with the
container and filename you used when uploading the file.

The hashmap is the ``x-object-hash`` field, while the size of the file is the
``content-length`` field, as returned by the ``kamaki file info`` command.

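To isolate just these two fields from the rather verbose output, you can pipe it through ``grep``; a small convenience using the same container and filename as above:

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump | grep -iE 'x-object-hash|content-length'
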
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithosmap://<HashMap>/<Size>",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: if you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithosmap://<HashMap>/<size>`` (see the example
   after this list):

   * ``HashMap``: the map of the file
   * ``size``: the size of the file, same size as reported in ``ls -la filename``

 * ``img_properties``: taken from the metadata file. We use only the two mandatory
   properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
   <http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_

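For example, if ``kamaki file info`` had reported a (hypothetical) ``x-object-hash`` of ``3cbd2f...81a9`` and a ``content-length`` of ``903479296``, the corresponding OS parameter would be written as:

.. code-block:: console

   img_id="pithosmap://3cbd2f...81a9/903479296"
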
If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require UUID instead of a username). Another issue you may encounter is that in
relatively slow setups, you may need to raise the default HELPER_*_TIMEOUTS in
/etc/default/snf-image. Also, make sure you gave the correct ``img_id`` and
``img_properties``. If ``gnt-instance add`` succeeds but you cannot connect,
again find out what went wrong. Do *NOT* proceed to the next steps unless you
are sure everything works up to this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator.

In this section, we'll describe the simplest scenario, which will provide
access to the public Internet along with private networking capabilities for
the VMs.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network is a set of custom scripts that perform all the necessary actions,
so that VMs have a working networking configuration.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

nfdhcpd is an NFQUEUE based daemon, answering DHCP requests and running locally
on every Ganeti node. Its leases file gets automatically updated by
snf-network, using information provided by Ganeti.

.. code-block:: console

   # apt-get install python-nfqueue=0.4+physindev-1~wheezy
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s (the one running dnsmasq for instance, or you can use
Google's DNS server ``8.8.8.8``). Restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

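For reference, the two settings mentioned above would end up looking roughly like this in ``/etc/nfdhcpd/nfdhcpd.conf`` (Google's DNS is used here only as an example; adjust it to your own resolver and leave the rest of the shipped file untouched):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 8.8.8.8
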
In order for nfdhcpd to receive the VMs' requests, we have to mangle all DHCP
traffic coming from the corresponding interfaces. To accomplish that run:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and append it to your ``/etc/rc.local``.

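You can confirm that the rule is in place by listing the mangle table (optional):

.. code-block:: console

   # iptables -t mangle -nvL PREROUTING
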
You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

In the following section, we'll guide you through a very basic network setup.
This assumes the following:

    * Node1 has access to the public network via eth0.
    * Node1 will become a NAT server for the VMs.
    * All nodes have ``br1/br2`` dedicated for the VMs' public/private traffic.
    * VMs' public network is ``10.0.0.0/24`` with gateway ``10.0.0.1``.

Setting up the NAT server on node1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To set up the NAT server on node1, run:

.. code-block:: console

   # ip addr add 10.0.0.1/24 dev br1
   # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   # echo 1 > /proc/sys/net/ipv4/ip_forward

and append it to your ``/etc/rc.local``.

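To verify that forwarding and masquerading are active, you can run the following optional checks on node1:

.. code-block:: console

   # cat /proc/sys/net/ipv4/ip_forward
   # iptables -t nat -nvL POSTROUTING
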
Testing the Public Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~

First add the network in Ganeti:

.. code-block:: console

   # gnt-network add --network=10.0.0.0/24 --gateway=10.0.0.1 --tags=nfdhcpd test-net-public

Then, provide connectivity mode and link to the network:

.. code-block:: console

   # gnt-network connect test-net-public bridged br1

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, almost the same way we did it in the
previous testing section. However, now we'll also add one NIC, configured to be
managed from our previously defined network.

Fetch the Debian Old Base image locally (on all nodes), by running:

.. code-block:: console

   # wget http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump -O /var/lib/snf-image/debian_base-6.0-x86_64.diskdump

Also on all nodes, bring all ``br*`` interfaces up:

.. code-block:: console

   # ifconfig br1 up
   # ifconfig br2 up

Finally, run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

The following things should happen:

    * Ganeti creates a tap interface.
    * snf-network bridges the tap interface to ``br1`` and updates nfdhcpd state.
    * nfdhcpd serves IP 10.0.0.2 to the interface of ``testvm2``.

Now try to ping the outside world e.g., ``www.synnefo.org`` from inside the VM
(connect to the VM using VNC as before).

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

In this section, we'll describe a basic network configuration that will provide
isolated private networks to the end-users. All private network traffic will
pass through ``br1`` and isolation will be guaranteed with a specific set of
``ebtables`` rules.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We'll create two instances and connect them to the same Private Network. This
means that the instances will have a second NIC connected to ``br1``.

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac bridged br1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac -n node2 \
                      testvm4

Above, we create two instances with their first NIC connected to the internet and
their second NIC connected to a MAC filtered private Network. Now, connect to the
instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a "public" IP.

 b) ``eth1`` will have mac prefix ``aa:00:55``.

 c) On testvm3, ping 192.168.1.2 to verify connectivity over the private network.

If everything works as expected, then you have finished the Network Setup at the
backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At least we need to set the RabbitMQ endpoint for all tools
that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users `here.
<http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_

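Before moving on, you can optionally check that the new credentials are accepted by RAPI. Assuming Ganeti's default RAPI port (5080) and the cluster name used earlier, a request like the following should return cluster information rather than an authentication error (``-k`` skips certificate verification, as discussed for self-signed certificates):

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://ganeti.node1.example.com:5080/2/info
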
You have now finished with all needed Prerequisites for Cyclades. Let's move on
to the actual Cyclades installation.


Installation of Cyclades on node1
==================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by 'snf-image' from being stored in the Ganeti configuration (e.g. VM
passwords). This is achieved by storing all sensitive information in a cache
backend and exporting it via VMAPI. The cache entries are invalidated after the
first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.

Configuration of Cyclades
==========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to end up with a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_AUTH_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_AUTH_URL`` setting
(see also :ref:`previously <conf-pithos>`).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with Astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. Those settings should have the same
values as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

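Since the VMAPI relies on memcached, it is worth confirming that memcached is listening on the address configured above; an optional check, assuming the ``netstat`` utility (Debian's net-tools package) is available:

.. code-block:: console

   # netstat -nltp | grep 11211
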
Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI user <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we set up the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want to
learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge.

.. code-block:: console

   $ snf-manage network-create --subnet=10.0.0.0/24 \
                               --gateway=10.0.0.1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br1 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   # snf-manage reconcile-networks

You can use ``snf-manage reconcile-networks --fix-all`` to fix any
inconsistencies that may have arisen.

You can see all available networks by running:

.. code-block:: console

   # snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   # snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   # gnt-network list
   # gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the administrator has to define the
corresponding pools in Synnefo. In this guide we create a MAC prefix pool:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

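A bridge pool is only needed if you also plan to offer bridged (physical VLAN) private networks, which this guide does not set up. If you do, it would be created in the same way; a sketch with a hypothetical base name and size:

.. code-block:: console

   # snf-manage pool-create --type=bridge --base=prv --size=20
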
Also, change the Synnefo setting in :file:`/etc/synnefo/20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'br2'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
----------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges. By
default it is not enabled during installation of Cyclades, so let's enable it in
its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

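For example, you can leave the following running in a separate terminal while you complete the remaining steps:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
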
``snf-ganeti-eventd`` on GANETI MASTER
---------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

VM stats configuration
----------------------

Please refer to the documentation in the :ref:`admin guide <admin-guide-stats>`
for deploying and configuring snf-stats-app and collectd.


If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
====================

Cyclades Web UI
---------------

First of all, we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Log in and then click 'Cyclades' on the
top cloud bar. This should redirect you to:

 `http://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user has uploaded
a file in container ``pithos`` called ``debian_base-6.0-x86_64``. Moreover, they
should have the appropriate `metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_.

Once the Image file has been successfully uploaded on Pithos, we register it to
Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" pithos:debian_base-6.0-x86_64 \
     --metafile debian_base-6.0-x86_64.diskdump.meta --public

This command registers a Pithos file as an Image in Cyclades. This Image will
be public (``--public``), so all users will be able to spawn VMs from it.

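You can also list the registered Images to confirm that "Debian Base" now appears among them; depending on your kamaki version, something along these lines:

.. code-block:: console

   $ kamaki image list
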
Spawn a VM from the Cyclades Web UI
------------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/cyclades/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components.