.. _quick-install-admin-guide:
2

    
3
Administrator's Installation Guide
4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5

    
6
This is the Administrator's installation guide.
7

    
8
It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimal configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After a successful installation, you will
have the following services running:
12

    
13
    * Identity Management (Astakos)
14
    * Object Storage Service (Pithos)
15
    * Compute Service (Cyclades)
16
    * Image Service (part of Cyclades)
17
    * Network Service (part of Cyclades)
18

    
19
and a single unified Web UI to manage them all.
20

    
21
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
22
not released yet.
23

    
24
If you just want to install the Object Storage Service (Pithos), follow the
25
guide and just stop after the "Testing of Pithos" section.
26

    
27

    
28
Installation of Synnefo / Introduction
29
======================================
30

    
31
We will install the services in the order of the list above. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).
37

    
38
For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and that their public IPs are
"4.3.2.1" and "4.3.2.2" respectively. It is important that the two machines are
under the same domain name. If you choose to do a private installation, you
will need to set up a private DNS server, using dnsmasq for example. See the
node1 section below for more details.
44

    
45
General Prerequisites
46
=====================
47

    
48
These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos, Cyclades).
50

    
51
To be able to download all synnefo components you need to add the following
52
lines in your ``/etc/apt/sources.list`` file:
53

    
54
| ``deb http://apt.dev.grnet.gr squeeze/``
55
| ``deb-src http://apt.dev.grnet.gr squeeze/``
56

    
57
and import the repo's GPG key:
58

    
59
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``
60

    
61
Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:
65

    
66
| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
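
After adding these lines, refresh the package index so the new repositories
are picked up (run this on both nodes):

.. code-block:: console

   # apt-get update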
67

    
68
You also need a shared directory visible by both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2
has this directory mounted under ``/srv/pithos``, too.
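
A minimal sketch of such an NFS setup follows. It assumes the standard Debian
``nfs-kernel-server`` and ``nfs-common`` packages and the example IPs used in
this guide; adjust the export options and addresses to your environment:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos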
75

    
76
Before starting the synnefo installation, you will need basic third party
77
software to be installed and configured on the physical nodes. We will describe
78
each node's general prerequisites separately. Any additional configuration,
79
specific to a synnefo service for each node, will be described at the service's
80
section.
81

    
82
Finally, it is required for Cyclades and Ganeti nodes to have synchronized
83
system clocks (e.g. by running ntpd).
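
Once ntp is installed (see the per-node package lists below), you can quickly
check that the clocks are actually synchronizing with:

.. code-block:: console

   # ntpq -p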
84

    
85
Node1
86
-----
87

    
88

    
89
General Synnefo dependencies
90
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
91

    
92
		* apache (http server)
93
		* public certificate
94
		* gunicorn (WSGI http server)
95
		* postgresql (database)
96
		* rabbitmq (message queue)
97
		* ntp (NTP daemon)
98
		* gevent
99
		* dns server
100

    
101
You can install apache2, postgresql, ntp and rabbitmq by running:
102

    
103
.. code-block:: console
104

    
105
   # apt-get install apache2 postgresql ntp rabbitmq-server
106

    
107
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
108
the official debian backports:
109

    
110
.. code-block:: console
111

    
112
   # apt-get -t squeeze-backports install gunicorn
113

    
114
Also, make sure to install gevent >= 0.13.6. Again from the debian backports:
115

    
116
.. code-block:: console
117

    
118
   # apt-get -t squeeze-backports install python-gevent
119

    
120
On node1, we will create our databases, so you will also need the
121
python-psycopg2 package:
122

    
123
.. code-block:: console
124

    
125
   # apt-get install python-psycopg2
126

    
127

    
128
Database setup
129
~~~~~~~~~~~~~~
130

    
131
On node1, we create a database called ``snf_apps``, that will host the tables
of all Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
134

    
135
.. code-block:: console
136

    
137
    root@node1:~ # su - postgres
138
    postgres@node1:~ $ psql
139
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
140
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
141
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;
142

    
143
We also create the database ``snf_pithos`` needed by the Pithos backend and
144
grant the ``synnefo`` user all privileges on the database. This database could
145
be created on node2 instead, but we do it on node1 for simplicity. We will
146
create all needed databases on node1 and then node2 will connect to them.
147

    
148
.. code-block:: console
149

    
150
    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
151
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
152

    
153
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
156

    
157
.. code-block:: console
158

    
159
    listen_addresses = '*'
160

    
161
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
162
node2 to connect to the database. Add the following lines under ``#IPv4 local
163
connections:`` :
164

    
165
.. code-block:: console
166

    
167
    host		all	all	4.3.2.1/32	md5
168
    host		all	all	4.3.2.2/32	md5
169

    
170
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
171
actual IPs. Now, restart the server to apply the changes:
172

    
173
.. code-block:: console
174

    
175
   # /etc/init.d/postgresql restart
176

    
177
Gunicorn setup
178
~~~~~~~~~~~~~~
179

    
180
Rename the file ``/etc/gunicorn.d/synnefo.example`` to
181
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:
182

    
183
.. code-block:: console
184

    
185
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
186

    
187

    
188
.. warning:: Do NOT start the server yet, because it won't find the
189
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
190
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
191
    ``--worker-class=sync``. We will start the server after successful
192
    installation of astakos. If the server is running::
193

    
194
       # /etc/init.d/gunicorn stop
195

    
196
Certificate Creation
197
~~~~~~~~~~~~~~~~~~~~~
198

    
199
Node1 will host Cyclades. Cyclades should communicate with the other snf tools
over a trusted connection. In order for the connection to be trusted, the keys
provided to apache below should be signed with a certificate. This certificate
should be added to all nodes. In case you don't have signed keys, you can
create a self-signed certificate and sign your keys with it. To do so, run on
node1:
203

    
204
.. code-block:: console
205

    
206
		# aptitude install openvpn
207
		# mkdir /etc/openvpn/easy-rsa
208
		# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
209
		# cd /etc/openvpn/easy-rsa/2.0
210
		# vim vars
211

    
212
In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``:
213

    
214
.. code-block:: console
215

    
216
	# . ./vars
217
	# ./clean-all
218

    
219
Now you can create the certificate:
220

    
221
.. code-block:: console
222

    
223
		# ./build-ca
224

    
225
The previous command will create a ``ca.crt`` file. Copy this file under the
``/usr/local/share/ca-certificates/`` directory and run:
227

    
228
.. code-block:: console
229

    
230
		# update-ca-certificates
231

    
232
to update the records. You will have to do the following on node2 as well.
233

    
234
Now you can create the keys and sign them with the certificate:
235

    
236
.. code-block:: console
237

    
238
		# ./build-key-server node1.example.com
239

    
240
This will create a .pem and a .key file in your current folder. Copy them to
``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively, and
use them in the apache2 configuration file below instead of the defaults.
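
For example, assuming you copied the generated files as
``node1.example.com.pem`` and ``node1.example.com.key`` (the names are
illustrative), the relevant lines of the SSL virtual host below would become:

.. code-block:: console

    SSLCertificateFile    /etc/ssl/certs/node1.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/node1.example.com.key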
243

    
244
Apache2 setup
245
~~~~~~~~~~~~~
246

    
247
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
248
following:
249

    
250
.. code-block:: console
251

    
252
    <VirtualHost *:80>
253
        ServerName node1.example.com
254

    
255
        RewriteEngine On
256
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
257
        RewriteRule ^(.*)$ - [F,L]
258
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
259
    </VirtualHost>
260

    
261

    
262
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
263
following:
264

    
265
.. code-block:: console
266

    
267
    <IfModule mod_ssl.c>
268
    <VirtualHost _default_:443>
269
        ServerName node1.example.com
270

    
271
        Alias /static "/usr/share/synnefo/static"
272

    
273
        #  SetEnv no-gzip
274
        #  SetEnv dont-vary
275

    
276
       AllowEncodedSlashes On
277

    
278
       RequestHeader set X-Forwarded-Protocol "https"
279

    
280
    <Proxy * >
281
        Order allow,deny
282
        Allow from all
283
    </Proxy>
284

    
285
        SetEnv                proxy-sendchunked
286
        SSLProxyEngine        off
287
        ProxyErrorOverride    off
288

    
289
        ProxyPass        /static !
290
        ProxyPass        / http://localhost:8080/ retry=0
291
        ProxyPassReverse / http://localhost:8080/
292

    
293
        RewriteEngine On
294
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
295
        RewriteRule ^(.*)$ - [F,L]
296

    
297
        SSLEngine on
298
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
299
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
300
    </VirtualHost>
301
    </IfModule>
302

    
303
Now enable sites and modules by running:
304

    
305
.. code-block:: console
306

    
307
   # a2enmod ssl
308
   # a2enmod rewrite
309
   # a2dissite default
310
   # a2ensite synnefo
311
   # a2ensite synnefo-ssl
312
   # a2enmod headers
313
   # a2enmod proxy_http
314

    
315
.. note:: This isn't really needed, but it's a good security practice to disable
316
    directory listing in apache::
317

    
318
        # a2dismod autoindex
319

    
320

    
321
.. warning:: Do NOT start/restart the server yet. If the server is running::
322

    
323
       # /etc/init.d/apache2 stop
324

    
325

    
326
.. _rabbitmq-setup:
327

    
328
Message Queue setup
329
~~~~~~~~~~~~~~~~~~~
330

    
331
The message queue will run on node1, so we need to create the appropriate
332
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
333
exchanges:
334

    
335
.. code-block:: console
336

    
337
   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
338
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
339

    
340
We do not need to initialize the exchanges. This will be done automatically,
341
during the Cyclades setup.
342

    
343
Pithos data directory setup
344
~~~~~~~~~~~~~~~~~~~~~~~~~~~
345

    
346
As mentioned in the General Prerequisites section, there is a directory called
347
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:
349

    
350
.. code-block:: console
351

    
352
   # cd /srv/pithos
353
   # mkdir data
354
   # chown www-data:www-data data
355
   # chmod g+ws data
356

    
357
DNS server setup
358
~~~~~~~~~~~~~~~~
359

    
360
If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:
362

    
363
.. code-block:: console
364

    
365
				# apt-get install dnsmasq
366

    
367
Then edit your ``/etc/hosts`` file as follows:
368

    
369
.. code-block:: console
370

    
371
		4.3.2.1     node1.example.com
372
		4.3.2.2     node2.example.com
373

    
374
Finally, edit the ``/etc/dnsmasq.conf`` file and specify the
``listen-address`` and the ``interface`` you would like to listen on.
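
For example, a minimal ``/etc/dnsmasq.conf`` sketch, assuming node1's IP and an
``eth0`` interface (adjust both to your setup), could look like:

.. code-block:: console

   listen-address=4.3.2.1
   interface=eth0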
376

    
377
Also add the following to your ``/etc/resolv.conf`` file:
378

    
379
.. code-block:: console
380

    
381
		nameserver 4.3.2.1
382

    
383
You are now ready with all general prerequisites concerning node1. Let's go to
384
node2.
385

    
386
Node2
387
-----
388

    
389
General Synnefo dependencies
390
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
391

    
392
    * apache (http server)
393
    * gunicorn (WSGI http server)
394
    * postgresql (database)
395
    * ntp (NTP daemon)
396
    * gevent
397
    * certificates
398
    * dns setup
399

    
400
You can install the above by running:
401

    
402
.. code-block:: console
403

    
404
   # apt-get install apache2 postgresql ntp
405

    
406
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
407
the official debian backports:
408

    
409
.. code-block:: console
410

    
411
   # apt-get -t squeeze-backports install gunicorn
412

    
413
Also, make sure to install gevent >= 0.13.6. Again from the debian backports:
414

    
415
.. code-block:: console
416

    
417
   # apt-get -t squeeze-backports install python-gevent
418

    
419
Node2 will connect to the databases on node1, so you will also need the
420
python-psycopg2 package:
421

    
422
.. code-block:: console
423

    
424
   # apt-get install python-psycopg2
425

    
426
Database setup
427
~~~~~~~~~~~~~~
428

    
429
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but those kinds of setups are
outside the scope of this guide.
434

    
435
Gunicorn setup
436
~~~~~~~~~~~~~~
437

    
438
Rename the file ``/etc/gunicorn.d/synnefo.example`` to
439
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
440
(as happened for node1):
441

    
442
.. code-block:: console
443

    
444
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
445

    
446

    
447
.. warning:: Do NOT start the server yet, because it won't find the
448
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
449
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
450
    ``--worker-class=sync``. We will start the server after successful
451
    installation of astakos. If the server is running::
452

    
453
       # /etc/init.d/gunicorn stop
454

    
455
Apache2 setup
456
~~~~~~~~~~~~~
457

    
458
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
459
following:
460

    
461
.. code-block:: console
462

    
463
    <VirtualHost *:80>
464
        ServerName node2.example.com
465

    
466
        RewriteEngine On
467
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
468
        RewriteRule ^(.*)$ - [F,L]
469
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
470
    </VirtualHost>
471

    
472
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
473
containing the following:
474

    
475
.. code-block:: console
476

    
477
    <IfModule mod_ssl.c>
478
    <VirtualHost _default_:443>
479
        ServerName node2.example.com
480

    
481
        Alias /static "/usr/share/synnefo/static"
482

    
483
        SetEnv no-gzip
484
        SetEnv dont-vary
485
        AllowEncodedSlashes On
486

    
487
        RequestHeader set X-Forwarded-Protocol "https"
488

    
489
        <Proxy * >
490
            Order allow,deny
491
            Allow from all
492
        </Proxy>
493

    
494
        SetEnv                proxy-sendchunked
495
        SSLProxyEngine        off
496
        ProxyErrorOverride    off
497

    
498
        ProxyPass        /static !
499
        ProxyPass        / http://localhost:8080/ retry=0
500
        ProxyPassReverse / http://localhost:8080/
501

    
502
        SSLEngine on
503
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
504
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
505
    </VirtualHost>
506
    </IfModule>
507

    
508
As in node1, enable sites and modules by running:
509

    
510
.. code-block:: console
511

    
512
   # a2enmod ssl
513
   # a2enmod rewrite
514
   # a2dissite default
515
   # a2ensite synnefo
516
   # a2ensite synnefo-ssl
517
   # a2enmod headers
518
   # a2enmod proxy_http
519

    
520
.. note:: This isn't really needed, but it's a good security practice to disable
521
    directory listing in apache::
522

    
523
        # a2dismod autoindex
524

    
525
.. warning:: Do NOT start/restart the server yet. If the server is running::
526

    
527
       # /etc/init.d/apache2 stop
528

    
529

    
530
Acquire certificate
531
~~~~~~~~~~~~~~~~~~~
532

    
533
Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/``
535

    
536
and run:
537

    
538
.. code-block:: console
539

    
540
		# update-ca-certificates
541

    
542
to update the records.
543

    
544

    
545
DNS Setup
546
~~~~~~~~~
547

    
548
Add the following line to the ``/etc/resolv.conf`` file
549

    
550
.. code-block:: console
551

    
552
		nameserver 4.3.2.1
553

    
554
to inform the node about the new DNS server.
555

    
556
We are now ready with all general prerequisites for node2. Now that we have
557
finished with all general prerequisites for both nodes, we can start installing
558
the services. First, let's install Astakos on node1.
559

    
560
Installation of Astakos on node1
561
================================
562

    
563
To install astakos, grab the package from our repository (make sure  you made
564
the additions needed in your ``/etc/apt/sources.list`` file, as described
565
previously), by running:
566

    
567
.. code-block:: console
568

    
569
   # apt-get install snf-astakos-app snf-pithos-backend
570

    
571
.. _conf-astakos:
572

    
573
Configuration of Astakos
574
========================
575

    
576
Conf Files
577
----------
578

    
579
After astakos is successfully installed, you will find the directory
580
``/etc/synnefo`` and some configuration files inside it. The files contain
581
commented configuration options, which are the default options. While installing
582
new snf-* components, new configuration files will appear inside the directory.
583
In this guide (and for all services), we will edit only the minimum necessary
584
configuration options, to reflect our setup. Everything else will remain as is.
585

    
586
After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.
589

    
590
For the snf-webproject component (installed as an astakos dependency), we
591
need the following:
592

    
593
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
594
uncomment and edit the ``DATABASES`` block to reflect our database:
595

    
596
.. code-block:: console
597

    
598
    DATABASES = {
599
     'default': {
600
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
601
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
602
         # ATTENTION: This *must* be the absolute path if using sqlite3.
603
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
604
         'NAME': 'snf_apps',
605
         'USER': 'synnefo',                      # Not used with sqlite3.
606
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
607
         # Set to empty string for localhost. Not used with sqlite3.
608
         'HOST': '4.3.2.1',
609
         # Set to empty string for default. Not used with sqlite3.
610
         'PORT': '5432',
611
     }
612
    }
613

    
614
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
615
``SECRET_KEY``. This is a Django specific setting which is used to provide a
616
seed in secret-key hashing algorithms. Set this to a random string of your
617
choice and keep it private:
618

    
619
.. code-block:: console
620

    
621
    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
622

    
623
For astakos specific configuration, edit the following options in
624
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :
625

    
626
.. code-block:: console
627

    
628
    ASTAKOS_COOKIE_DOMAIN = '.example.com'
629

    
630
    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'
631

    
632
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
633
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
634
extra path (``/astakos`` here) is recommended in order to distinguish
635
components, if more than one are installed on the same machine.
636

    
637
.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
638
    If you would like to enable it, you have to edit the following options:
639

    
640
    .. code-block:: console
641

    
642
        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
643
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
644
        ASTAKOS_RECAPTCHA_USE_SSL = True
645
        ASTAKOS_RECAPTCHA_ENABLED = True
646

    
647
    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
648
    go to https://www.google.com/recaptcha/admin/create and create your own pair.
649

    
650
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
651

    
652
.. code-block:: console
653

    
654
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
655

    
656
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
657

    
658
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
659

    
660
Those settings have to do with the black cloudbar endpoints and will be
661
described in more detail later on in this guide. For now, just edit the domain
662
to point at node1 which is where we have installed Astakos.
663

    
664
If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.
666

    
667
.. _email-configuration:
668

    
669
Email delivery configuration
670
----------------------------
671

    
672
Many of the ``astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process the
service sends an email to the registered email address containing an email
verification URL; after the user verifies the email address, astakos once again
needs to notify administrators with a notice that a new account has just been
verified.
677

    
678
More specifically, astakos sends emails in the following cases:
679

    
680
- An email containing a verification link after each signup process.
681
- An email to the people listed in ``ADMINS`` setting after each email
682
  verification if ``ASTAKOS_MODERATION`` setting is ``True``. The email
683
  notifies administrators that an additional action is required in order to
684
  activate the user.
685
- A welcome email to the user email and an admin notification to ``ADMINS``
686
  right after each account activation.
687
- Feedback messages submitted from the astakos contact view and the astakos
  feedback API endpoint are sent to the contacts listed in the ``HELPDESK``
  setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project membership action (join request, membership
  accepted/declined etc.) to project members or project owners.
693

    
694
Astakos uses the Django internal email delivery mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below. Alter the following example to meet your
smtp server characteristics. Notice that an smtp server is needed for a proper
installation:
699

    
700
.. code-block:: python
701

    
702
    # /etc/synnefo/00-snf-common-admins.conf
703
    EMAIL_HOST = "mysmtp.server.synnefo.org"
704
    EMAIL_HOST_USER = "<smtpuser>"
705
    EMAIL_HOST_PASSWORD = "<smtppassword>"
706

    
707
    # this gets appended in all email subjects
708
    EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "
709

    
710
    # Address to use for outgoing emails
711
    DEFAULT_FROM_EMAIL = "server@example.synnefo.org"
712

    
713
    # Email where users can contact for support. This is used in html/email
714
    # templates.
715
    CONTACT_EMAIL = "server@example.synnefo.org"
716

    
717
    # The email address that error messages come from
718
    SERVER_EMAIL = "server-errors@example.synnefo.org"
719

    
720
Notice that since email settings might be required by applications other than
721
astakos they are defined in a different configuration file than the one
722
previously used to set astakos specific settings.
723

    
724
Refer to
725
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
726
for additional information on available email settings.
727

    
728
As mentioned in the previous section, the recipients list differs based on the
operation that triggers an email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk etc.), synnefo provides the following settings, located in
``00-snf-common-admins.conf``:
733

    
734
.. code-block:: python
735

    
736
    ADMINS = (('Admin name', 'admin@example.synnefo.org'),
              ('Admin2 name', 'admin2@example.synnefo.org'))
738
    MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
739
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)
740

    
741
Alternatively, it may be convenient to send e-mails to a file instead of an
actual smtp server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:
742

    
743
.. code-block:: python
744

    
745
    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
746
    EMAIL_FILE_PATH = '/tmp/app-messages' 
747
  
748

    
749

    
750
Enable Pooling
751
--------------
752

    
753
This section can be bypassed, but we strongly recommend you apply the following,
754
since they result in a significant performance boost.
755

    
756
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
757
around Psycopg2. This allows independent Django requests to reuse pooled DB
758
connections, with significant performance gains.
759

    
760
To use, first monkey-patch psycopg2. For Django, run this before the
761
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:
762

    
763
.. code-block:: console
764

    
765
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
766
    monkey_patch_psycopg2()
767

    
768
Since we are running with greenlets, we should modify psycopg2 behavior, so it
769
works properly in a greenlet context:
770

    
771
.. code-block:: console
772

    
773
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
774
    make_psycopg_green()
775

    
776
Use the Psycopg2 driver as usual. For Django, this means using
777
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
778
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
779
driver, through ``DATABASES.OPTIONS`` in Django.
780

    
781
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
782
file that looks like this:
783

    
784
.. code-block:: console
785

    
786
    # Monkey-patch psycopg2
787
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
788
    monkey_patch_psycopg2()
789

    
790
    # If running with greenlets
791
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
792
    make_psycopg_green()
793

    
794
    DATABASES = {
795
     'default': {
796
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
797
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
798
         'OPTIONS': {'synnefo_poolsize': 8},
799

    
800
         # ATTENTION: This *must* be the absolute path if using sqlite3.
801
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
802
         'NAME': 'snf_apps',
803
         'USER': 'synnefo',                      # Not used with sqlite3.
804
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
805
         # Set to empty string for localhost. Not used with sqlite3.
806
         'HOST': '4.3.2.1',
807
         # Set to empty string for default. Not used with sqlite3.
808
         'PORT': '5432',
809
     }
810
    }
811

    
812
Database Initialization
813
-----------------------
814

    
815
After configuration is done, we initialize the database by running:
816

    
817
.. code-block:: console
818

    
819
    # snf-manage syncdb
820

    
821
In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for astakos:
824

    
825
.. code-block:: console
826

    
827
    # snf-manage migrate im
828
    # snf-manage migrate quotaholder_app
829

    
830
Then, we load the pre-defined user groups
831

    
832
.. code-block:: console
833

    
834
    # snf-manage loaddata groups
835

    
836
.. _services-reg:
837

    
838
Services Registration
839
---------------------
840

    
841
When the database is ready, we need to register the services. The following
842
command will ask you to register the standard Synnefo components (astakos,
843
cyclades, and pithos) along with the services they provide. Note that you
844
have to register at least astakos in order to have a usable authentication
845
system. For each component, you will be asked to provide two URLs: its base
846
URL and its UI URL.
847

    
848
The former is the location where the component resides; it should equal
849
the ``<component_name>_BASE_URL`` as specified in the respective component
850
settings. For example, the base URL for astakos would be
851
``https://node1.example.com/astakos``.
852

    
853
The latter is the URL that appears in the Cloudbar and leads to the
854
component UI. If you want to follow the default setup, set
855
the UI URL to ``<base_url>/ui/``, where ``base_url`` is the component's base
URL as explained before. (You can later change the UI URL with
857
``snf-manage component-modify <component_name> --url new_ui_url``.)
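
For example, with the URLs assumed in this guide, changing the pithos UI URL at
a later point would look like this (illustrative):

.. code-block:: console

   # snf-manage component-modify pithos --url https://node2.example.com/pithos/ui/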
858

    
859
The command will also register automatically the resource definitions
860
offered by the services.
861

    
862
.. code-block:: console
863

    
864
    # snf-component-register
865

    
866
.. note::
867

    
868
   This command is equivalent to running the following series of commands;
869
   it registers the three components in astakos and then in each host it
870
   exports the respective service definitions, copies the exported json file
871
   to the astakos host, where it finally imports it:
872

    
873
    .. code-block:: console
874

    
875
       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
876
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
877
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
878
       astakos-host$ snf-manage service-export-astakos > astakos.json
879
       astakos-host$ snf-manage service-import --json astakos.json
880
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
881
       # copy the file to astakos-host
882
       astakos-host$ snf-manage service-import --json cyclades.json
883
       pithos-host$ snf-manage service-export-pithos > pithos.json
884
       # copy the file to astakos-host
885
       astakos-host$ snf-manage service-import --json pithos.json
886

    
887
Notice that in this installation astakos and cyclades are on node1 and pithos is on node2.
888

    
889
Setting Default Base Quota for Resources
890
----------------------------------------
891

    
892
We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or
memory size limits, consider adding an appropriate size suffix to the
numeric value, e.g. 10240 MB, 10 GB etc.
896

    
897
.. code-block:: console
898

    
899
    # snf-manage resource-modify --default-quota-interactive
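
If you prefer a non-interactive approach, individual resources can usually be
set directly; the exact flags may differ between synnefo versions, so verify
with ``snf-manage resource-modify --help`` first. A hedged sketch:

.. code-block:: console

   # snf-manage resource-modify cyclades.vm --default-quota 2
   # snf-manage resource-modify pithos.diskspace --default-quota 10G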
900

    
901
.. _pithos_view_registration:
902

    
903
Register pithos view as an OAuth 2.0 client
904
-------------------------------------------
905

    
906
Starting from synnefo version 0.15, the pithos view, in order to get access to
the data of a protected pithos resource, has to be granted authorization for
the specific resource by astakos.
909

    
910
During the authorization grant procedure, it has to authenticate itself with
astakos, since the latter has to prevent serving requests by
unknown/unauthorized clients.
913

    
914
To register the pithos view as an OAuth 2.0 client in astakos, we have to run
915
the following command::
916

    
917
    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url https://node2.example.com/pithos/ui/view
918

    
919
Servers Initialization
920
----------------------
921

    
922
Finally, we initialize the servers on node1:
923

    
924
.. code-block:: console
925

    
926
    root@node1:~ # /etc/init.d/gunicorn restart
927
    root@node1:~ # /etc/init.d/apache2 restart
928

    
929
We have now finished the Astakos setup. Let's test it now.
930

    
931

    
932
Testing of Astakos
933
==================
934

    
935
Open your favorite browser and go to:
936

    
937
``http://node1.example.com/astakos``
938

    
939
If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can see
940
the "welcome" door of Astakos, then you have successfully setup Astakos.
941

    
942
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
943
and fill all your data at the sign up form. Then click "SUBMIT". You should now
944
see a green box on the top, which informs you that you made a successful request
945
and the request has been sent to the administrators. So far so good, let's
946
assume that you created the user with username ``user@example.com``.
947

    
948
Now we need to activate that user. Return to a command prompt at node1 and run:
949

    
950
.. code-block:: console
951

    
952
    root@node1:~ # snf-manage user-list
953

    
954
This command should show you a list with only one user; the one we just created.
955
This user should have an id with a value of ``1``, and the flags "active" and
"verified" set to False. Now run:
957

    
958
.. code-block:: console
959

    
960
    root@node1:~ # snf-manage user-modify 1 --verify --accept
961

    
962
This verifies the user email and activates the user.
963
When running in production, the activation is done automatically with different
964
types of moderation, that Astakos supports. You can see the moderation methods
965
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
966
documentation. In production, you can also manually activate a user, by sending
967
him/her an activation email. See how to do this at the :ref:`User
968
activation <user_activation>` section.
969

    
970
Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/`` with
971
your browser again. Try to sign in using your new credentials. If the astakos
972
menu appears and you can see your profile, then you have successfully setup
973
Astakos.
974

    
975
Let's continue to install Pithos now.
976

    
977

    
978
Installation of Pithos on node2
979
===============================
980

    
981
To install Pithos, grab the packages from our repository (make sure  you made
982
the additions needed in your ``/etc/apt/sources.list`` file, as described
983
previously), by running:
984

    
985
.. code-block:: console
986

    
987
   # apt-get install snf-pithos-app snf-pithos-backend
988

    
989
Now, install the pithos web interface:
990

    
991
.. code-block:: console
992

    
993
   # apt-get install snf-pithos-webclient
994

    
995
This package provides the standalone pithos web client. The web client is the
996
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
997
interface's cloudbar, at the top of the Astakos homepage.
998

    
999

    
1000
.. _conf-pithos:
1001

    
1002
Configuration of Pithos
1003
=======================
1004

    
1005
Conf Files
1006
----------
1007

    
1008
After Pithos is successfully installed, you will find the directory
1009
``/etc/synnefo`` and some configuration files inside it, as you did in node1
1010
after installation of astakos. Here, you will not have to change anything that
1011
has to do with snf-common or snf-webproject. Everything is set at node1. You
1012
only need to change settings that have to do with Pithos. Specifically:
1013

    
1014
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:
1016

    
1017
.. code-block:: console
1018

    
1019
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'
1020

    
1021
   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
1022
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
1023
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'
1024

    
1025
   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'
1026

    
1027

    
1028
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
1033

    
1034
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.
1038

    
1039
The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
1040
The Astakos service is used for user management (authentication, quotas, etc.)
1041

    
1042
The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.
1043

    
1044
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
1045
It can be retrieved by running on the Astakos node (node1 in our case):
1046

    
1047
.. code-block:: console
1048

    
1049
   # snf-manage component-list
1050

    
1051
The token has been generated automatically during the :ref:`Pithos service
1052
registration <services-reg>`.
1053

    
1054
The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
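
For instance, to re-enable the MD5 checksums for OpenStack compatibility, you
would add the following line to ``/etc/synnefo/20-snf-pithos-app-settings.conf``:

.. code-block:: console

   PITHOS_UPDATE_MD5 = True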
1058

    
1059
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
1060
Pithos web UI with the astakos web UI (through the top cloudbar):
1061

    
1062
.. code-block:: console
1063

    
1064
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
1065
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
1066
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
1067

    
1068
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
1069
cloudbar.
1070

    
1071
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
1072
Pithos web client to get from astakos all the information needed to fill its
1073
own cloudbar. So we put our astakos deployment urls there.
1074

    
1075
The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view
in order to authenticate itself with astakos during the authorization grant
procedure, and it should contain the credentials issued for the pithos view
in `the pithos view registration step`__.
1079

    
1080
__ pithos_view_registration_
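
As a hedged illustration of what this looks like (double-check the commented
default in the conf file for the exact format expected by your synnefo
version), the setting pairs the client identifier registered above with its
secret:

.. code-block:: console

   PITHOS_OAUTH2_CLIENT_CREDENTIALS = ("pithos-view", "<secret>")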
1081

    
1082
Pooling and Greenlets
1083
---------------------
1084

    
1085
Pithos is pooling-ready without the need of further configuration, because it
1086
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
1087
backend objects for access to the Pithos DB.
1088

    
1089
However, as in Astakos, since we are running with Greenlets, it is also
1090
recommended to modify psycopg2 behavior so it works properly in a greenlet
1091
context. This means adding the following lines at the top of your
1092
``/etc/synnefo/10-snf-webproject-database.conf`` file:
1093

    
1094
.. code-block:: console
1095

    
1096
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
1097
    make_psycopg_green()
1098

    
1099
Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
1100
mentioned above, depending on your setup) argument on your
1101
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
1102
like this:
1103

    
1104
.. code-block:: console
1105

    
1106
    CONFIG = {
1107
     'mode': 'django',
1108
     'environment': {
1109
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
1110
     },
1111
     'working_dir': '/etc/synnefo',
1112
     'user': 'www-data',
1113
     'group': 'www-data',
1114
     'args': (
1115
       '--bind=127.0.0.1:8080',
1116
       '--workers=4',
1117
       '--worker-class=gevent',
1118
       '--log-level=debug',
1119
       '--timeout=43200'
1120
     ),
1121
    }
1122

    
1123
Stamp Database Revision
1124
-----------------------
1125

    
1126
Pithos uses the alembic_ database migrations tool.
1127

    
1128
.. _alembic: http://alembic.readthedocs.org
1129

    
1130
After a successful installation, we should stamp it at the most recent
1131
revision, so that future migrations know where to start upgrading in
1132
the migration history.
1133

    
1134
.. code-block:: console
1135

    
1136
    root@node2:~ # pithos-migrate stamp head
1137

    
1138
Servers Initialization
1139
----------------------
1140

    
1141
After configuration is done, we initialize the servers on node2:
1142

    
1143
.. code-block:: console
1144

    
1145
    root@node2:~ # /etc/init.d/gunicorn restart
1146
    root@node2:~ # /etc/init.d/apache2 restart
1147

    
1148
You have now finished the Pithos setup. Let's test it now.
1149

    
1150

    
1151
Testing of Pithos
1152
=================
1153

    
1154
Open your browser and go to the Astakos homepage:
1155

    
1156
``http://node1.example.com/astakos``
1157

    
1158
Login, and you will see your profile page. Now, click the "pithos" link on the
1159
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/pithos/ui/``

and you will see the blue interface of the Pithos application. Click the
1164
orange "Upload" button and upload your first file. If the file gets uploaded
1165
successfully, then this is your first sign of a successful Pithos installation.
1166
Go ahead and experiment with the interface to make sure everything works
1167
correctly.
1168

    
1169
You can also use the Pithos clients to sync data from your Windows PC or Mac.
1170

    
1171
If you don't stumble on any problems, then you have successfully installed
1172
Pithos, which you can use as a standalone File Storage Service.
1173

    
1174
If you would like to do more, such as:
1175

    
1176
    * Spawning VMs
1177
    * Spawning VMs from Images stored on Pithos
1178
    * Uploading your custom Images to Pithos
1179
    * Spawning VMs from those custom Images
1180
    * Registering existing Pithos files as Images
1181
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks
1184

    
1185
please continue with the rest of the guide.
1186

    
1187

    
1188
Cyclades Prerequisites
1189
======================
1190

    
1191
Before proceeding with the Cyclades installation, make sure you have
1192
successfully set up Astakos and Pithos first, because Cyclades depends on
1193
them. If you don't have a working Astakos and Pithos installation yet, please
1194
return to the :ref:`top <quick-install-admin-guide>` of this guide.
1195

    
1196
Besides Astakos and Pithos, you will also need a number of additional working
1197
prerequisites, before you start the Cyclades installation.
1198

    
1199
Ganeti
1200
------
1201

    
1202
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
1203
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
1204
Please refer to the
1205
`ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all the
1206
gory details. A successful Ganeti installation concludes with a working
1207
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
1208
<GANETI_NODES>`.
1209

    
1210
The above Ganeti cluster can run on different physical machines than node1 and
1211
node2 and can scale independently, according to your needs.
1212

    
1213
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
1214
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
1215
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
1216

    
1217
We highly recommend that you read the official Ganeti documentation, if you are
1218
not familiar with Ganeti.
1219

    
1220
Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
1221
support IP pool management. This feature will be available in Ganeti >= 2.7.
1222
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
1223
GRNET provided packages until stable 2.7 is out. These packages will also install
1224
the proper version of Ganeti. To do so:
1225

    
1226
.. code-block:: console
1227

    
1228
   # apt-get install snf-ganeti ganeti-htools
1229

    
1230
Ganeti will make use of DRBD. To enable this and make the configuration
permanent, you have to do the following:
1232

    
1233
.. code-block:: console
1234

    
1235
		# rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
1236
		# echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules
1237

    
1238

    
1239
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
1240
both nodes, choose a domain name that resolves to a valid floating IP (let's
1241
say it's ``ganeti.node1.example.com``). This IP is needed to communicate with
1242
the Ganeti cluster. Make sure node1 and node2 have the same DSA/RSA keys and
``authorized_keys``, for password-less root ssh between each other. If not, skip
passing ``--no-ssh-init`` below, but be aware that it will replace the files
under ``/root/.ssh/`` and you might lose access to the master node. Also, Ganeti
will need a volume to host your VMs' disks. So, make sure there is an LVM volume
group named ``ganeti``. Finally, set up a bridge interface on the host machines
(e.g. ``br0``). This will be needed for the network configuration afterwards.
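
A minimal sketch of these two prerequisites, assuming a spare ``/dev/sdb1``
partition for LVM and ``eth1`` as the bridged interface (both names are
illustrative; adapt them to your hardware):

.. code-block:: console

   # apt-get install bridge-utils
   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1

   # cat >> /etc/network/interfaces <<EOF
   auto br0
   iface br0 inet manual
       bridge_ports eth1
       bridge_stp off
       bridge_fd 0
   EOF
   # ifup br0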
1248

    
1249
Then run on node1:
1250

    
1251
.. code-block:: console
1252

    
1253
    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
1254
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
1255
                    --master-netdev eth0 ganeti.node1.example.com
1256
    root@node1:~ # gnt-cluster modify --default-iallocator hail
1257
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
1258
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0
1259

    
1260
    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
1261
                    --vm-capable=yes node2.example.com
1262
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
1263
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
1264

    
1265
You can verify that the ganeti cluster is successfully set up, by running on the
1266
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1267

    
1268
.. code-block:: console
1269

    
1270
   # gnt-cluster verify
1271

    
1272
For any problems you may stumble upon installing Ganeti, please refer to the
1273
`official documentation <http://docs.ganeti.org/ganeti/2.6/html>`_. Installation
1274
of Ganeti is out of the scope of this guide.
1275

    
1276
.. _cyclades-install-snfimage:
1277

    
1278
snf-image
1279
---------
1280

    
1281
Installation
1282
~~~~~~~~~~~~
1283
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_
OS Definition installed on *all* VM-capable Ganeti nodes. This means we need
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ on
node1 and node2. You can do this by running on *both* nodes:
1289

    
1290
.. code-block:: console
1291

    
1292
   # apt-get install snf-image snf-pithos-backend python-psycopg2
1293

    
1294
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
1295
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
1296
able to access the Pithos database. This is why we also install them on *all*
1297
VM-capable Ganeti nodes.
1298

    
1299
.. warning::
1300
    snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. In case you haven't followed the guide's
    directions about the certificates, in order to circumvent this you should
    edit the file ``/etc/default/snf-image`` and change ``#CURL="curl"`` to
    ``CURL="curl -k"`` on every node.
1305

    
1306
Configuration
1307
~~~~~~~~~~~~~
1308
snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.
1312

    
1313
To do this, we need to set the corresponding variables in
1314
``/etc/default/snf-image``, to reflect our Pithos setup:
1315

    
1316
.. code-block:: console
1317

    
1318
    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"
1319

    
1320
    PITHOS_DATA="/srv/pithos/data"
1321

    
1322
If you have installed your Ganeti cluster on different nodes than node1 and
1323
node2 make sure that ``/srv/pithos/data`` is visible by all of them.
1324

    
1325
If you would like to use Images that are also/only stored locally, you need to
1326
save them under ``IMAGE_DIR``, however this guide targets Images stored only on
1327
Pithos.
1328

    
1329
Testing
1330
~~~~~~~
1331
You can test that snf-image is successfully installed by running on the
1332
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1333

    
1334
.. code-block:: console
1335

    
1336
   # gnt-os diagnose
1337

    
1338
This should return ``valid`` for snf-image.
1339

    
1340
If you are interested to learn more about snf-image's internals (and even use
1341
it alongside Ganeti without Synnefo), please see
1342
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for information
1343
concerning installation instructions, documentation on the design and
1344
implementation, and supported Image formats.
1345

    
1346
.. _snf-image-images:
1347

    
1348
Actual Images for snf-image
1349
---------------------------
1350

    
1351
Now that snf-image is installed successfully we need to provide it with some
1352
Images.
1353
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
1354
supports Images stored in ``extdump``, ``ntfsdump`` or ``diskdump`` format. We
1355
recommend the use of the ``diskdump`` format. For more information about
1356
snf-image Image formats see `here
1357
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.
1358

    
1359
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
1360
also supports three (3) different locations for the above Images to be stored:
1361

    
1362
    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
1363
      in :file:`/etc/default/snf-image`)
1364
    * On a remote host (accessible via public URL e.g: http://... or ftp://...)
1365
    * On Pithos (accessible natively, not only by its public URL)
1366

    
1367
For the purpose of this guide, we will use the Debian Squeeze Base Image found
1368
on the official `snf-image page
1369
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
1370
image is of type ``diskdump``. We will store it in our new Pithos installation.
1371

    
1372
To do so, do the following:
1373

    
1374
a) Download the Image from the official snf-image page.
1375

    
1376
b) Upload the Image to your Pithos installation, either using the Pithos Web
1377
   UI or the command line client `kamaki
1378
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.
1379

    
1380
Once the Image is uploaded successfully, download the Image's metadata file
1381
from the official snf-image page. You will need it, for spawning a VM from
1382
Ganeti, in the next section.
1383

    
1384
Of course, you can repeat the procedure to upload more Images, available from
1385
the `official snf-image page
1386
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.
1387

    
1388
.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
-----------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: if you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithos://<UUID>/<container>/<filename>``, where:

   * ``UUID``: the username found in the Cyclades Web UI under API access
   * ``container``: ``pithos`` (the default, if the Web UI was used)
   * ``filename``: the name of the file (also visible from the Web UI)

 * ``img_properties``: taken from the metadata file. Only the two mandatory
   properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used. `Learn more
   <http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_.

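For example, with a purely illustrative UUID, a fully substituted ``img_id``
would look like the following (remember to use your own UUID, container and
filename):

.. code-block:: console

   img_id="pithos://a13b9fa0-1234-5678-9abc-def012345678/pithos/debian_base-6.0-7-x86_64.diskdump"
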
If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works up to this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To this end, please read the
:ref:`Network <networks>` section before proceeding.

Since Synnefo 0.11, all network actions are managed with the ``snf-manage
network-*`` commands. This requires the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, VLANs) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a Synnefo-specific
   kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing
Cyclades, we will perform some tests in this section; the actual setup will be
done afterwards with ``snf-manage`` commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding DHCP lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up the DHCP leases; when the VM boots and
makes a DHCP request, iptables will mangle the packet and `nfdhcpd` will
create a DHCP response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
the very least, set the ``dhcp_queue`` variable to ``42`` and the
``nameservers`` variable to your DNS IP/s. Those IPs will be passed as the DNS
IP/s of your new VMs. Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

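For reference, the relevant lines of ``nfdhcpd.conf`` would then look roughly
like the sketch below; the exact section layout may differ between nfdhcpd
versions, and ``4.3.2.1`` (node1's IP) merely stands in for your actual DNS
server:

.. code-block:: console

   [dhcp]
   dhcp_queue = 42
   nameservers = 4.3.2.1
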
If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure the following rule is applied after each boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and, if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

After running the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

To achieve basic networking, the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs
connect. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy-ARP), please
refer to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100

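Note that the above commands do not survive a reboot. One way to make the
bridge persistent on Debian, sketched here under the assumption that you use
the stock ``bridge-utils``/``vlan`` integration with
``/etc/network/interfaces``, is:

.. code-block:: console

   # illustrative stanza in /etc/network/interfaces
   auto br0
   iface br0 inet manual
       bridge_ports eth0.100
       bridge_stp off
       bridge_fd 0
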
Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, the same way we did in the previous
testing section. However, now we will also add one NIC, configured to be
managed by our previously defined network. Run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM through VNC as before
and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the ``nameservers`` option in ``nfdhcpd.conf``). This verifies the
correct configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type, a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1`` to ``prv100``) is needed, each
bridged on a different physical VLAN. To ensure isolation among end-users'
private networks, each network must either have a different MAC prefix (for
the filtering to take place) or be "connected" to a different bridge (i.e. a
different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

We need to create the necessary VLANs/bridges: one for the MAC-filtered
private networks and several (e.g. 20) for the private networks based on
physical VLANs. Assuming ``eth0`` on both hosts is connected to the same
switch (with the VLANs configured correctly), run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new VLAN interfaces: ``eth0.0`` - ``eth0.20``
 * add each VLAN interface to its corresponding bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

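The output should look roughly like the following sketch (the bridge ids will
of course differ on your hosts):

.. code-block:: console

   # brctl show
   bridge name     bridge id               STP enabled     interfaces
   br0             8000.001122334455       no              eth0.100
   prv0            8000.001122334455       no              eth0.0
   prv1            8000.001122334455       no              eth0.1
   ...
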
Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC-filtered and one physical-VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same commands as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the Internet,
their second NIC connected to a MAC-filtered private network and their third
NIC connected to the first physical-VLAN private network. Now, connect to the
instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have MAC prefix ``aa:00:55``, while ``eth2`` will have the
    default one (``aa:00:00``).

 c) Bring up the interfaces: ``ip link set eth1 up`` and ``ip link set eth2 up``.

 d) Request leases: ``dhclient eth1`` and ``dhclient eth2``.

 e) On testvm3, ping ``192.168.1.2`` and ``10.0.0.2``.

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output (the MD5 digest, prefixed with ``{HA1}``) in
``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_.

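To quickly verify that the new user works, you can query the RAPI endpoint
directly. The sketch below assumes RAPI listens on its default port (5080) and
that the RAPI daemon has been restarted (recent Ganeti versions also reload
the users file automatically):

.. code-block:: console

   # /etc/init.d/ganeti restart
   # curl --insecure -u cyclades:example_rapi_passw0rd \
        https://node1.example.com:5080/2/info
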
You have now finished with all needed Prerequisites for Cyclades. Let's move on
to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we can
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by 'snf-image' from being stored in the Ganeti configuration (e.g. the
VM password). This is achieved by storing all sensitive information in a cache
backend and exporting it via the VMAPI. The cache entries are invalidated
after the first request. Synnefo uses `memcached <http://memcached.org/>`_ as
a `Django <https://www.djangoproject.com/>`_ cache backend.

Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/``, prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed to end up with a working system.
In general, sane defaults have been chosen for most of the options, to cover
most of the common scenarios. However, if you want to tweak Cyclades, feel
free to do so once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_AUTH_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_AUTH_URL`` setting
(see also :ref:`previously <conf-pithos>`).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with
Astakos. It can be retrieved by running on the Astakos node (node1 in our
case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above setting denotes the Message Queue. It should have the same value as
in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and reflect
our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

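You can quickly verify that memcached is actually listening on the configured
endpoint; the probe below uses memcached's plain-text ``stats`` command (the
``-q 1`` flag is specific to Debian's netcat):

.. code-block:: console

   # echo stats | nc -q 1 127.0.0.1 11211 | head
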
Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti
cluster, we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more, please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a common bridge. After having a bridge (e.g. ``br0``) created
on every backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to
``'br0'`` and create the network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp=True --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend.
To make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on
the Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo:

.. code-block:: console

   root@testvm1:~ # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   root@testvm1:~ # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

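For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
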
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

VM stats configuration
----------------------

Please refer to the documentation in the :ref:`admin guide <admin-guide-stats>`
for deploying and configuring snf-stats-app and collectd.

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

 `https://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all
the available Images from which you can spawn new VMs. The list should
currently be empty, as we haven't registered any Images yet. Close the wizard
and browse the interface (not many things to see yet). If everything seems to
work, let's register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set cloud.default.url \
       "https://node1.example.com/astakos/identity/v2.0"
   $ kamaki config set cloud.default.token USER_TOKEN

Both the Authentication URL and the USER_TOKEN appear on the user's
`API access` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on his/her token (in this case the user is you):

.. code-block:: console

   $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.

Upload an Image file to Pithos
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we could upload the
Image under the root ``pithos`` container (as you may have done when uploading
the Image from the Pithos Web UI), we will create a new container called
``images`` and store the Image under that container. We do this for two
reasons:

a) To demonstrate how to create containers other than the default ``pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organizational practice, so that you won't have your Image files
   tangled along with all your other Pithos files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki file create images

To check if the container has been created, list all containers of your
account:

.. code-block:: console

   $ kamaki file list

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump images

The first argument is the local path and the second is the remote container
on Pithos. Check if the file has been uploaded, by listing the container
contents:

.. code-block:: console

   $ kamaki file list images

Alternatively, check if the new container and file appear on the Pithos Web UI.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user UUID is
``u53r-un1qu3-1d``.

Once the Image file has been successfully uploaded on Pithos, we register it
to Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the
remaining properties are optional, but recommended, so that the Images appear
nicely on the Cyclades Web UI. ``Debian Base`` will appear as the name of this
Image. The ``OS`` property's valid values may be found in the ``IMAGE_ICONS``
variable inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Cyclades to Ganeti and then to `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI
from your browser at:

 `https://node1.example.com/cyclades/ui/`

Click on the 'New Machine' button and the first step of the wizard will
appear. Click on 'My Images' (right after 'System' Images) on the left pane of
the wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos Web UI and that ``kamaki
image register`` returns successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by
selecting a flavor and a name for your VM. Then finish by clicking 'Create'.
Make sure you write down your password, because you *WON'T* be able to
retrieve it later.

If everything was set up correctly, after a few minutes your new machine will
go to state 'Running' and you will be able to use it. Click 'Console' to
connect through VNC out of band, or click on the machine's icon to connect
directly via SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====