.. _quick-install-admin-guide:
2

    
3
Administrator's Installation Guide
4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5

    
6
This is the Administrator's installation guide.
7

    
8
It describes how to install the whole synnefo stack on two (2) physical nodes,
9
with minimum configuration. It installs synnefo from Debian packages, and
10
assumes the nodes run Debian Squeeze. After successful installation, you will
11
have the following services running:
12

    
13
    * Identity Management (Astakos)
14
    * Object Storage Service (Pithos)
15
    * Compute Service (Cyclades)
16
    * Image Service (part of Cyclades)
17
    * Network Service (part of Cyclades)
18

    
19
and a single unified Web UI to manage them all.
20

    
21
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
22
not released yet.
23

    
24
If you only want to install the Object Storage Service (Pithos), follow the
guide and stop after the "Testing of Pithos" section.
26

    
27

    
28
Installation of Synnefo / Introduction
29
======================================
30

    
31
We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).
37

    
38
For the rest of the documentation we will refer to the first physical node as
39
"node1" and the second as "node2". We will also assume that their domain names
40
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
41
"4.3.2.2" respectively.
42

    
43
.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can achieve this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:
46

    
47
    .. code-block:: console
48

    
49
        4.3.2.1     node1.example.com
50
        4.3.2.2     node2.example.com
51

    
52

    
53
General Prerequisites
54
=====================
55

    
56
These are the general synnefo prerequisites, that you need on node1 and node2
57
and are related to all the services (Astakos, Pithos, Cyclades).
58

    
59
To be able to download all synnefo components you need to add the following
60
lines in your ``/etc/apt/sources.list`` file:
61

    
62
| ``deb http://apt.dev.grnet.gr squeeze/``
63
| ``deb-src http://apt.dev.grnet.gr squeeze/``
64

    
65
and import the repo's GPG key:
66

    
67
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``
68

    
69
Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:
73

    
74
| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
75

    
76
You also need a shared directory visible to both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2 has
this directory mounted under ``/srv/pithos``, too.
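
As an illustration, a minimal NFS setup for this shared directory could look
like the following sketch. It assumes the standard Debian ``nfs-kernel-server``
and ``nfs-common`` packages; adapt the export options to your environment:

.. code-block:: console

    root@node1:~ # apt-get install nfs-kernel-server
    root@node1:~ # mkdir -p /srv/pithos
    root@node1:~ # echo "/srv/pithos node2.example.com(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
    root@node1:~ # exportfs -ra

    root@node2:~ # apt-get install nfs-common
    root@node2:~ # mkdir -p /srv/pithos
    root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos
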
83

    
84
Before starting the synnefo installation, you will need basic third party
85
software to be installed and configured on the physical nodes. We will describe
86
each node's general prerequisites separately. Any additional configuration,
87
specific to a synnefo service for each node, will be described at the service's
88
section.
89

    
90
Finally, it is required for Cyclades and Ganeti nodes to have synchronized
91
system clocks (e.g. by running ntpd).
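
After the ntp package has been installed (this is done in the node-specific
sections below), you can verify that each node actually synchronizes against
its configured time servers with ``ntpq -p``, the query tool shipped with ntpd:

.. code-block:: console

   # ntpq -p
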
92

    
93
Node1
94
-----
95

    
96
General Synnefo dependencies
97
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
98

    
99
    * apache (http server)
100
    * gunicorn (WSGI http server)
101
    * postgresql (database)
102
    * rabbitmq (message queue)
103
    * ntp (NTP daemon)
104
    * gevent
105

    
106
You can install apache2, postgresql and ntp by running:
107

    
108
.. code-block:: console
109

    
110
   # apt-get install apache2 postgresql ntp
111

    
112
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
113
the official debian backports:
114

    
115
.. code-block:: console
116

    
117
   # apt-get -t squeeze-backports install gunicorn
118

    
119
Also, make sure to install gevent >= 0.13.6. Again from the debian backports:
120

    
121
.. code-block:: console
122

    
123
   # apt-get -t squeeze-backports install python-gevent
124

    
125
On node1, we will create our databases, so you will also need the
126
python-psycopg2 package:
127

    
128
.. code-block:: console
129

    
130
   # apt-get install python-psycopg2
131

    
132
To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
133
following line to ``/etc/apt/sources.list``:
134

    
135
.. code-block:: console
136

    
137
    deb http://www.rabbitmq.com/debian testing main
138

    
139
Add the RabbitMQ public key to the trusted key list:
140

    
141
.. code-block:: console
142

    
143
  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
144
  # apt-key add rabbitmq-signing-key-public.asc
145

    
146
Finally, to install the package run:
147

    
148
.. code-block:: console
149

    
150
  # apt-get update
151
  # apt-get install rabbitmq-server
152

    
153
Database setup
154
~~~~~~~~~~~~~~
155

    
156
On node1, we create a database called ``snf_apps``, that will host all tables
of the Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
159

    
160
.. code-block:: console
161

    
162
    root@node1:~ # su - postgres
163
    postgres@node1:~ $ psql
164
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
165
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
166
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;
167

    
168
We also create the database ``snf_pithos`` needed by the Pithos backend and
169
grant the ``synnefo`` user all privileges on the database. This database could
170
be created on node2 instead, but we do it on node1 for simplicity. We will
171
create all needed databases on node1 and then node2 will connect to them.
172

    
173
.. code-block:: console
174

    
175
    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
176
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
177

    
178
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
181

    
182
.. code-block:: console
183

    
184
    listen_addresses = '*'
185

    
186
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
187
node2 to connect to the database. Add the following lines under ``#IPv4 local
188
connections:`` :
189

    
190
.. code-block:: console
191

    
192
    host		all	all	4.3.2.1/32	md5
193
    host		all	all	4.3.2.2/32	md5
194

    
195
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
196
actual IPs. Now, restart the server to apply the changes:
197

    
198
.. code-block:: console
199

    
200
   # /etc/init.d/postgresql restart
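
If you want a quick sanity check that remote access works, you can later connect
to the database from node2 using the standard PostgreSQL client (provided by the
``postgresql-client`` package, in case it is not already installed):

.. code-block:: console

    root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps
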
201

    
202
Gunicorn setup
203
~~~~~~~~~~~~~~
204

    
205
Rename the file ``/etc/gunicorn.d/synnefo.example`` to
206
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:
207

    
208
.. code-block:: console
209

    
210
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
211

    
212

    
213
.. warning:: Do NOT start the server yet, because it won't find the
214
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
215
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
216
    ``--worker-class=sync``. We will start the server after successful
217
    installation of astakos. If the server is running::
218

    
219
       # /etc/init.d/gunicorn stop
220

    
221
Apache2 setup
222
~~~~~~~~~~~~~
223

    
224
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
225
following:
226

    
227
.. code-block:: console
228

    
229
    <VirtualHost *:80>
230
        ServerName node1.example.com
231

    
232
        RewriteEngine On
233
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
234
        RewriteRule ^(.*)$ - [F,L]
235
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
236
    </VirtualHost>
237

    
238
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
239
following:
240

    
241
.. code-block:: console
242

    
243
    <IfModule mod_ssl.c>
244
    <VirtualHost _default_:443>
245
        ServerName node1.example.com
246

    
247
        Alias /static "/usr/share/synnefo/static"
248

    
249
        #  SetEnv no-gzip
250
        #  SetEnv dont-vary
251

    
252
       AllowEncodedSlashes On
253

    
254
       RequestHeader set X-Forwarded-Protocol "https"
255

    
256
    <Proxy * >
257
        Order allow,deny
258
        Allow from all
259
    </Proxy>
260

    
261
        SetEnv                proxy-sendchunked
262
        SSLProxyEngine        off
263
        ProxyErrorOverride    off
264

    
265
        ProxyPass        /static !
266
        ProxyPass        / http://localhost:8080/ retry=0
267
        ProxyPassReverse / http://localhost:8080/
268

    
269
        RewriteEngine On
270
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
271
        RewriteRule ^(.*)$ - [F,L]
272

    
273
        SSLEngine on
274
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
275
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
276
    </VirtualHost>
277
    </IfModule>
278

    
279
Now enable sites and modules by running:
280

    
281
.. code-block:: console
282

    
283
   # a2enmod ssl
284
   # a2enmod rewrite
285
   # a2dissite default
286
   # a2ensite synnefo
287
   # a2ensite synnefo-ssl
288
   # a2enmod headers
289
   # a2enmod proxy_http
290

    
291
.. warning:: Do NOT start/restart the server yet. If the server is running::
292

    
293
       # /etc/init.d/apache2 stop
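
If you would like to make sure that the new virtual hosts at least parse
correctly without starting the server, Apache's standard ``configtest`` action
only checks the configuration syntax:

.. code-block:: console

   # apache2ctl configtest
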
294

    
295
.. _rabbitmq-setup:
296

    
297
Message Queue setup
298
~~~~~~~~~~~~~~~~~~~
299

    
300
The message queue will run on node1, so we need to create the appropriate
301
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
302
exchanges:
303

    
304
.. code-block:: console
305

    
306
   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
307
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
308

    
309
We do not need to initialize the exchanges. This will be done automatically,
310
during the Cyclades setup.
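
You can confirm that the user and its permissions are in place with the
standard ``rabbitmqctl`` listing commands:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions
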
311

    
312
Pithos data directory setup
313
~~~~~~~~~~~~~~~~~~~~~~~~~~~
314

    
315
As mentioned in the General Prerequisites section, there is a directory called
316
``/srv/pithos`` visible by both nodes. We create and setup the ``data``
317
directory inside it:
318

    
319
.. code-block:: console
320

    
321
   # cd /srv/pithos
322
   # mkdir data
323
   # chown www-data:www-data data
324
   # chmod g+ws data
325

    
326
You are now ready with all general prerequisites concerning node1. Let's go to
327
node2.
328

    
329
Node2
330
-----
331

    
332
General Synnefo dependencies
333
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
334

    
335
    * apache (http server)
336
    * gunicorn (WSGI http server)
337
    * postgresql (database)
338
    * ntp (NTP daemon)
339
    * gevent
340

    
341
You can install the above by running:
342

    
343
.. code-block:: console
344

    
345
   # apt-get install apache2 postgresql ntp
346

    
347
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
348
the official debian backports:
349

    
350
.. code-block:: console
351

    
352
   # apt-get -t squeeze-backports install gunicorn
353

    
354
Also, make sure to install gevent >= 0.13.6. Again from the debian backports:
355

    
356
.. code-block:: console
357

    
358
   # apt-get -t squeeze-backports install python-gevent
359

    
360
Node2 will connect to the databases on node1, so you will also need the
361
python-psycopg2 package:
362

    
363
.. code-block:: console
364

    
365
   # apt-get install python-psycopg2
366

    
367
Database setup
368
~~~~~~~~~~~~~~
369

    
370
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.
375

    
376
Gunicorn setup
377
~~~~~~~~~~~~~~
378

    
379
Rename the file ``/etc/gunicorn.d/synnefo.example`` to
380
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
381
(as happened for node1):
382

    
383
.. code-block:: console
384

    
385
    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo
386

    
387

    
388
.. warning:: Do NOT start the server yet, because it won't find the
389
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
390
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
391
    ``--worker-class=sync``. We will start the server after successful
392
    installation of astakos. If the server is running::
393

    
394
       # /etc/init.d/gunicorn stop
395

    
396
Apache2 setup
397
~~~~~~~~~~~~~
398

    
399
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
400
following:
401

    
402
.. code-block:: console
403

    
404
    <VirtualHost *:80>
405
        ServerName node2.example.com
406

    
407
        RewriteEngine On
408
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
409
        RewriteRule ^(.*)$ - [F,L]
410
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
411
    </VirtualHost>
412

    
413
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
414
containing the following:
415

    
416
.. code-block:: console
417

    
418
    <IfModule mod_ssl.c>
419
    <VirtualHost _default_:443>
420
        ServerName node2.example.com
421

    
422
        Alias /static "/usr/share/synnefo/static"
423

    
424
        SetEnv no-gzip
425
        SetEnv dont-vary
426
        AllowEncodedSlashes On
427

    
428
        RequestHeader set X-Forwarded-Protocol "https"
429

    
430
        <Proxy * >
431
            Order allow,deny
432
            Allow from all
433
        </Proxy>
434

    
435
        SetEnv                proxy-sendchunked
436
        SSLProxyEngine        off
437
        ProxyErrorOverride    off
438

    
439
        ProxyPass        /static !
440
        ProxyPass        / http://localhost:8080/ retry=0
441
        ProxyPassReverse / http://localhost:8080/
442

    
443
        SSLEngine on
444
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
445
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
446
    </VirtualHost>
447
    </IfModule>
448

    
449
As in node1, enable sites and modules by running:
450

    
451
.. code-block:: console
452

    
453
   # a2enmod ssl
454
   # a2enmod rewrite
455
   # a2dissite default
456
   # a2ensite synnefo
457
   # a2ensite synnefo-ssl
458
   # a2enmod headers
459
   # a2enmod proxy_http
460

    
461
.. warning:: Do NOT start/restart the server yet. If the server is running::
462

    
463
       # /etc/init.d/apache2 stop
464

    
465
We are now ready with all general prerequisites for node2. Now that we have
466
finished with all general prerequisites for both nodes, we can start installing
467
the services. First, let's install Astakos on node1.
468

    
469

    
470
Installation of Astakos on node1
471
================================
472

    
473
To install astakos, grab the package from our repository (make sure  you made
474
the additions needed in your ``/etc/apt/sources.list`` file, as described
475
previously), by running:
476

    
477
.. code-block:: console
478

    
479
   # apt-get install snf-astakos-app snf-pithos-backend
480

    
481
.. _conf-astakos:
482

    
483
Configuration of Astakos
484
========================
485

    
486
Conf Files
487
----------
488

    
489
After astakos is successfully installed, you will find the directory
490
``/etc/synnefo`` and some configuration files inside it. The files contain
491
commented configuration options, which are the default options. While installing
492
new snf-* components, new configuration files will appear inside the directory.
493
In this guide (and for all services), we will edit only the minimum necessary
494
configuration options, to reflect our setup. Everything else will remain as is.
495

    
496
After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.
499

    
500
For the snf-webproject component (installed as an astakos dependency), we
501
need the following:
502

    
503
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
504
uncomment and edit the ``DATABASES`` block to reflect our database:
505

    
506
.. code-block:: console
507

    
508
    DATABASES = {
509
     'default': {
510
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
511
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
512
         # ATTENTION: This *must* be the absolute path if using sqlite3.
513
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
514
         'NAME': 'snf_apps',
515
         'USER': 'synnefo',                      # Not used with sqlite3.
516
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
517
         # Set to empty string for localhost. Not used with sqlite3.
518
         'HOST': '4.3.2.1',
519
         # Set to empty string for default. Not used with sqlite3.
520
         'PORT': '5432',
521
     }
522
    }
523

    
524
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
525
``SECRET_KEY``. This is a Django specific setting which is used to provide a
526
seed in secret-key hashing algorithms. Set this to a random string of your
527
choice and keep it private:
528

    
529
.. code-block:: console
530

    
531
    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
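
If you want to generate such a random string rather than inventing one, the
following one-liner is one possible way to do it, using the Python interpreter
already present on the system (any sufficiently long random string will do):

.. code-block:: console

   # python -c "import string, random; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for _ in range(50))"
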
532

    
533
For astakos specific configuration, edit the following options in
534
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :
535

    
536
.. code-block:: console
537

    
538
    ASTAKOS_COOKIE_DOMAIN = '.example.com'
539

    
540
    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'
541

    
542
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
543
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
544
extra path (``/astakos`` here) is recommended in order to distinguish
545
components, if more than one are installed on the same machine.
546

    
547
.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
548
    If you would like to enable it, you have to edit the following options:
549

    
550
    .. code-block:: console
551

    
552
        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
553
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
554
        ASTAKOS_RECAPTCHA_USE_SSL = True
555
        ASTAKOS_RECAPTCHA_ENABLED = True
556

    
557
    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
558
    go to https://www.google.com/recaptcha/admin/create and create your own pair.
559

    
560
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
561

    
562
.. code-block:: console
563

    
564
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
565

    
566
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
567

    
568
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
569

    
570
Those settings have to do with the black cloudbar endpoints and will be
571
described in more detail later on in this guide. For now, just edit the domain
572
to point at node1 which is where we have installed Astakos.
573

    
574
If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.
576

    
577
.. _email-configuration:
578

    
579
Email delivery configuration
580
----------------------------
581

    
582
Many of the ``astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process the
service sends an email to the registered email address containing an email
verification url; after the user verifies the email address, astakos once again
needs to notify administrators with a notice that a new account has just been
verified.
587

    
588
More specifically, astakos sends emails in the following cases:
589

    
590
- An email containing a verification link after each signup process.
591
- An email to the people listed in ``ADMINS`` setting after each email 
592
  verification if ``ASTAKOS_MODERATION`` setting is ``True``. The email 
593
  notifies administrators that an additional action is required in order to 
594
  activate the user.
595
- A welcome email to the user email and an admin notification to ``ADMINS`` 
596
  right after each account activation.
597
- Feedback messages submitted from the astakos contact view and the astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined etc.) to project members or project owners.
603

    
604
Astakos uses the Django internal email delivery mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below.
607

    
608
.. code-block:: python
609
    
610
    # /etc/synnefo/10-snf-common-admins.conf
611
    EMAIL_HOST = "mysmtp.server.synnefo.org"
612
    EMAIL_HOST_USER = "<smtpuser>"
613
    EMAIL_HOST_PASSWORD = "<smtppassword>"
614

    
615
    # this gets appended in all email subjects
616
    EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "
617
    
618
    # Address to use for outgoing emails
619
    DEFAULT_FROM_EMAIL = "server@example.synnefo.org"
620

    
621
    # Email where users can contact for support. This is used in html/email 
622
    # templates.
623
    CONTACT_EMAIL = "server@example.synnefo.org"
624

    
625
    # The email address that error messages come from
626
    SERVER_EMAIL = "server-errors@example.synnefo.org"
627

    
628
Notice that, since email settings might be required by applications other than
astakos, they are defined in a different configuration file than the one
previously used to set astakos-specific settings.
631

    
632
Refer to 
633
`Django documentation <https://docs.djangoproject.com/en/1.2/topics/email/>`_
634
for additional information on available email settings.
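
To quickly check that the SMTP settings above actually deliver mail, you can
send a test message with Django's standard ``send_mail`` helper from a Django
shell. The sketch below assumes that ``snf-manage`` exposes the stock Django
``shell`` command on your installation; the recipient address is just an
example:

.. code-block:: console

   # snf-manage shell
   >>> from django.core.mail import send_mail
   >>> send_mail("Test subject", "Test body", "server@example.synnefo.org",
   ...           ["admin@example.synnefo.org"])
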
635

    
636
As described in the previous section, the recipients list differs based on the
operation that triggers an email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk etc.), synnefo provides the following settings, located in
``10-snf-common-admins.conf``:
641

    
642
.. code-block:: python
643

    
644
    ADMINS = (('Admin name', 'admin@example.synnefo.org'),
              ('Admin2 name', 'admin2@example.synnefo.org'))
646
    MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
647
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)
648

    
649

    
650

    
651
Enable Pooling
652
--------------
653

    
654
This section can be bypassed, but we strongly recommend you apply the following
changes, since they result in a significant performance boost.
656

    
657
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
658
around Psycopg2. This allows independent Django requests to reuse pooled DB
659
connections, with significant performance gains.
660

    
661
To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:
663

    
664
.. code-block:: console
665

    
666
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
667
    monkey_patch_psycopg2()
668

    
669
Since we are running with greenlets, we should modify psycopg2 behavior, so it
670
works properly in a greenlet context:
671

    
672
.. code-block:: console
673

    
674
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
675
    make_psycopg_green()
676

    
677
Use the Psycopg2 driver as usual. For Django, this means using
678
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
679
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
680
driver, through ``DATABASES.OPTIONS`` in Django.
681

    
682
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
683
file that looks like this:
684

    
685
.. code-block:: console
686

    
687
    # Monkey-patch psycopg2
688
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
689
    monkey_patch_psycopg2()
690

    
691
    # If running with greenlets
692
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
693
    make_psycopg_green()
694

    
695
    DATABASES = {
696
     'default': {
697
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
698
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
699
         'OPTIONS': {'synnefo_poolsize': 8},
700

    
701
         # ATTENTION: This *must* be the absolute path if using sqlite3.
702
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
703
         'NAME': 'snf_apps',
704
         'USER': 'synnefo',                      # Not used with sqlite3.
705
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
706
         # Set to empty string for localhost. Not used with sqlite3.
707
         'HOST': '4.3.2.1',
708
         # Set to empty string for default. Not used with sqlite3.
709
         'PORT': '5432',
710
     }
711
    }
712

    
713
Database Initialization
714
-----------------------
715

    
716
After configuration is done, we initialize the database by running:
717

    
718
.. code-block:: console
719

    
720
    # snf-manage syncdb
721

    
722
In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for astakos:
725

    
726
.. code-block:: console
727

    
728
    # snf-manage migrate im
729
    # snf-manage migrate quotaholder_app
730

    
731
Then, we load the pre-defined user groups:
732

    
733
.. code-block:: console
734

    
735
    # snf-manage loaddata groups
736

    
737
.. _services-reg:
738

    
739
Services Registration
740
---------------------
741

    
742
When the database is ready, we need to register the services. The following
743
command will ask you to register the standard Synnefo components (astakos,
744
cyclades, and pithos) along with the services they provide. Note that you
745
have to register at least astakos in order to have a usable authentication
746
system. For each component, you will be asked to provide two URLs: its base
747
URL and its UI URL.
748

    
749
The former is the location where the component resides; it should equal
750
the ``<component_name>_BASE_URL`` as specified in the respective component
751
settings. For example, the base URL for astakos would be
752
``https://node1.example.com/astakos``.
753

    
754
The latter is the URL that appears in the Cloudbar and leads to the
755
component UI. If you want to follow the default setup, set
756
the UI URL to ``<base_url>/ui/`` where ``base_url`` the component's base
757
URL as explained before. (You can later change the UI URL with
758
``snf-manage component-modify <component_name> --url new_ui_url``.)
759

    
760
The command will also register automatically the resource definitions
761
offered by the services.
762

    
763
.. code-block:: console
764

    
765
    # snf-component-register
766

    
767
.. note::
768

    
769
   This command is equivalent to running the following series of commands;
770
   it registers the three components in astakos and then in each host it
771
   exports the respective service definitions, copies the exported json file
772
   to the astakos host, where it finally imports it:
773

    
774
    .. code-block:: console
775

    
776
       astakos-host$ snf-manage component-add astakos astakos_ui_url
777
       astakos-host$ snf-manage component-add cyclades cyclades_ui_url
778
       astakos-host$ snf-manage component-add pithos pithos_ui_url
779
       astakos-host$ snf-manage service-export-astakos > astakos.json
780
       astakos-host$ snf-manage service-import --json astakos.json
781
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
782
       # copy the file to astakos-host
783
       astakos-host$ snf-manage service-import --json cyclades.json
784
       pithos-host$ snf-manage service-export-pithos > pithos.json
785
       # copy the file to astakos-host
786
       astakos-host$ snf-manage service-import --json pithos.json
787

    
788
Setting Default Base Quota for Resources
789
----------------------------------------
790

    
791
We now have to specify the limit on resources that each user can employ
792
(exempting resources offered by projects).
793

    
794
.. code-block:: console
795

    
796
    # snf-manage resource-modify --limit-interactive
797

    
798

    
799
Servers Initialization
800
----------------------
801

    
802
Finally, we initialize the servers on node1:
803

    
804
.. code-block:: console
805

    
806
    root@node1:~ # /etc/init.d/gunicorn restart
807
    root@node1:~ # /etc/init.d/apache2 restart
808

    
809
We have now finished the Astakos setup. Let's test it now.
810

    
811

    
812
Testing of Astakos
813
==================
814

    
815
Open your favorite browser and go to:
816

    
817
``http://node1.example.com/astakos``
818

    
819
If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can see
820
the "welcome" door of Astakos, then you have successfully setup Astakos.
821

    
822
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
823
and fill all your data at the sign up form. Then click "SUBMIT". You should now
824
see a green box on the top, which informs you that you made a successful request
825
and the request has been sent to the administrators. So far so good, let's
826
assume that you created the user with username ``user@example.com``.
827

    
828
Now we need to activate that user. Return to a command prompt at node1 and run:
829

    
830
.. code-block:: console
831

    
832
    root@node1:~ # snf-manage user-list
833

    
834
This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1`` and the flags "active" and
"verified" set to False. Now run:
837

    
838
.. code-block:: console
839

    
840
    root@node1:~ # snf-manage user-modify 1 --verify --accept
841

    
842
This verifies the user email and activates the user.
843
When running in production, the activation is done automatically with different
844
types of moderation, that Astakos supports. You can see the moderation methods
845
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
846
documentation. In production, you can also manually activate a user, by sending
847
him/her an activation email. See how to do this at the :ref:`User
848
activation <user_activation>` section.
849

    
850
Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/`` with
851
your browser again. Try to sign in using your new credentials. If the astakos
852
menu appears and you can see your profile, then you have successfully setup
853
Astakos.
854

    
855
Let's continue to install Pithos now.
856

    
857

    
858
Installation of Pithos on node2
859
===============================
860

    
861
To install Pithos, grab the packages from our repository (make sure  you made
862
the additions needed in your ``/etc/apt/sources.list`` file, as described
863
previously), by running:
864

    
865
.. code-block:: console
866

    
867
   # apt-get install snf-pithos-app snf-pithos-backend
868

    
869
Now, install the pithos web interface:
870

    
871
.. code-block:: console
872

    
873
   # apt-get install snf-pithos-webclient
874

    
875
This package provides the standalone pithos web client. The web client is the
876
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
877
interface's cloudbar, at the top of the Astakos homepage.
878

    
879

    
880
.. _conf-pithos:
881

    
882
Configuration of Pithos
883
=======================
884

    
885
Conf Files
886
----------
887

    
888
After Pithos is successfully installed, you will find the directory
889
``/etc/synnefo`` and some configuration files inside it, as you did in node1
890
after installation of astakos. Here, you will not have to change anything that
891
has to do with snf-common or snf-webproject. Everything is set at node1. You
892
only need to change settings that have to do with Pithos. Specifically:
893

    
894
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:
896

    
897
.. code-block:: console
898

    
899
   ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'
900

    
901
   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
902
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
903
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'
904

    
905
   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'
906

    
907
   # Set to False if astakos & pithos are on the same host
908
   PITHOS_PROXY_USER_SERVICES = True
909

    
910

    
911
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
916

    
917
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.
921

    
922
The ``ASTAKOS_BASE_URL`` option informs the Pithos app where Astakos is.
923
The Astakos service is used for user management (authentication, quotas, etc.)
924

    
925
The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.
926

    
927
The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
928
It can be retrieved by running on the Astakos node (node1 in our case):
929

    
930
.. code-block:: console
931

    
932
   # snf-manage component-list
933

    
934
The token has been generated automatically during the :ref:`Pithos service
935
registration <services-reg>`.
936

    
937
The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
941

    
942
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
943
Pithos web UI with the astakos web UI (through the top cloudbar):
944

    
945
.. code-block:: console
946

    
947
    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
948
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
949
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
950

    
951
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
952
cloudbar.
953

    
954
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
955
Pithos web client to get from astakos all the information needed to fill its
956
own cloudbar. So we put our astakos deployment urls there.
957

    
958
Pooling and Greenlets
959
---------------------
960

    
961
Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
backend objects for access to the Pithos DB.
964

    
965
However, as in Astakos, since we are running with Greenlets, it is also
966
recommended to modify psycopg2 behavior so it works properly in a greenlet
967
context. This means adding the following lines at the top of your
968
``/etc/synnefo/10-snf-webproject-database.conf`` file:
969

    
970
.. code-block:: console
971

    
972
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
973
    make_psycopg_green()
974

    
975
Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
976
mentioned above, depending on your setup) argument on your
977
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
978
like this:
979

    
980
.. code-block:: console
981

    
982
    CONFIG = {
983
     'mode': 'django',
984
     'environment': {
985
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
986
     },
987
     'working_dir': '/etc/synnefo',
988
     'user': 'www-data',
989
     'group': 'www-data',
990
     'args': (
991
       '--bind=127.0.0.1:8080',
992
       '--workers=4',
993
       '--worker-class=gevent',
994
       '--log-level=debug',
995
       '--timeout=43200'
996
     ),
997
    }
998

    
999
Stamp Database Revision
1000
-----------------------
1001

    
1002
Pithos uses the alembic_ database migrations tool.
1003

    
1004
.. _alembic: http://alembic.readthedocs.org
1005

    
1006
After a successful installation, we should stamp it at the most recent
1007
revision, so that future migrations know where to start upgrading in
1008
the migration history.
1009

    
1010
.. code-block:: console
1011

    
1012
    root@node2:~ # pithos-migrate stamp head
1013

    
1014
Servers Initialization
1015
----------------------
1016

    
1017
After configuration is done, we initialize the servers on node2:
1018

    
1019
.. code-block:: console
1020

    
1021
    root@node2:~ # /etc/init.d/gunicorn restart
1022
    root@node2:~ # /etc/init.d/apache2 restart
1023

    
1024
You have now finished the Pithos setup. Let's test it now.
1025

    
1026

    
1027
Testing of Pithos
1028
=================
1029

    
1030
Open your browser and go to the Astakos homepage:
1031

    
1032
``http://node1.example.com/astakos``
1033

    
1034
Login, and you will see your profile page. Now, click the "pithos" link on the
1035
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/pithos/ui/``

and you will see the blue interface of the Pithos application.  Click the
1040
orange "Upload" button and upload your first file. If the file gets uploaded
1041
successfully, then this is your first sign of a successful Pithos installation.
1042
Go ahead and experiment with the interface to make sure everything works
1043
correctly.
1044

    
1045
You can also use the Pithos clients to sync data from your Windows PC or Mac.
1046

    
1047
If you don't stumble on any problems, then you have successfully installed
1048
Pithos, which you can use as a standalone File Storage Service.
1049

    
1050
If you would like to do more, such as:
1051

    
1052
    * Spawning VMs
1053
    * Spawning VMs from Images stored on Pithos
1054
    * Uploading your custom Images to Pithos
1055
    * Spawning VMs from those custom Images
1056
    * Registering existing Pithos files as Images
1057
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks
1060

    
1061
please continue with the rest of the guide.
1062

    
1063

    
1064
Cyclades Prerequisites
1065
======================
1066

    
1067
Before proceeding with the Cyclades installation, make sure you have
1068
successfully set up Astakos and Pithos first, because Cyclades depends on
1069
them. If you don't have a working Astakos and Pithos installation yet, please
1070
return to the :ref:`top <quick-install-admin-guide>` of this guide.
1071

    
1072
Besides Astakos and Pithos, you will also need a number of additional working
1073
prerequisites, before you start the Cyclades installation.
1074

    
1075
Ganeti
1076
------
1077

    
1078
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
1079
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
1080
Please refer to the
1081
`ganeti documentation <http://docs.ganeti.org/ganeti/2.6/html>`_ for all the
1082
gory details. A successful Ganeti installation concludes with a working
1083
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
1084
<GANETI_NODES>`.
1085

    
1086
The above Ganeti cluster can run on different physical machines than node1 and
1087
node2 and can scale independently, according to your needs.
1088

    
1089
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
1090
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
1091
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
1092

    
1093
We highly recommend that you read the official Ganeti documentation, if you are
1094
not familiar with Ganeti.
1095

    
1096
Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. To do so:
1100

    
1101
.. code-block:: console
1102

    
1103
   # apt-get install snf-ganeti ganeti-htools
1104
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
1105

    
1106
You should have:
1107

    
1108
Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1
1109

    
1110
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and authorized_keys for password-less root ssh between each other.
If they don't, omit the ``--no-ssh-init`` option below, but be aware that it
will replace the files under ``/root/.ssh/`` and you might lose access to the
master node. Also, make sure there is an LVM volume group named ``ganeti`` that
will host your VMs' disks. Finally, set up a bridge interface on the host
machines (e.g. br0). Then run on node1:
1119

    
1120
.. code-block:: console
1121

    
1122
    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
1123
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
1124
                    --master-netdev eth0 ganeti.node1.example.com
1125
    root@node1:~ # gnt-cluster modify --default-iallocator hail
1126
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
1127
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0
1128

    
1129
    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
1130
                    --vm-capable=yes node2.example.com
1131
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
1132
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
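
Before moving on, it is a good idea to sanity-check the new cluster with the
standard Ganeti verification commands:

.. code-block:: console

    root@node1:~ # gnt-cluster verify
    root@node1:~ # gnt-node list
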
1133

    
1134
For any problems you may stumble upon installing Ganeti, please refer to the
1135
`official documentation <http://docs.ganeti.org/ganeti/2.6/html>`_. Installation
1136
of Ganeti is out of the scope of this guide.
1137

    
1138
.. _cyclades-install-snfimage:
1139

    
1140
snf-image
1141
---------
1142

    
1143
Installation
1144
~~~~~~~~~~~~
1145
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_
OS Definition installed on *all* VM-capable Ganeti nodes. This means we need
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ on
node1 and node2. You can do this by running on *both* nodes:
1151

    
1152
.. code-block:: console
1153

    
1154
   # apt-get install snf-image snf-pithos-backend python-psycopg2
1155

    
1156
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.
1160

    
1161
.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
1162
    not  work out of the box if you try to use URLs served by servers which do
1163
    not have a valid certificate. To circumvent this you should edit the file
1164
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.
1165

    
1166
Configuration
1167
~~~~~~~~~~~~~
1168
snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need for providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.
1172

    
1173
To do this, we need to set the corresponding variables in
1174
``/etc/default/snf-image``, to reflect our Pithos setup:
1175

    
1176
.. code-block:: console
1177

    
1178
    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"
1179

    
1180
    PITHOS_DATA="/srv/pithos/data"
1181

    
1182
If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.
1184

    
1185
If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos.
1188

    
1189
Testing
1190
~~~~~~~
1191
You can test that snf-image is successfully installed by running on the
1192
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1193

    
1194
.. code-block:: console
1195

    
1196
   # gnt-os diagnose
1197

    
1198
This should return ``valid`` for snf-image.
1199

    
1200
If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for
installation instructions, documentation on the design and implementation, and
supported Image formats.
1205

    
1206
.. _snf-image-images:
1207

    
1208
Actual Images for snf-image
1209
---------------------------
1210

    
1211
Now that snf-image is installed successfully we need to provide it with some
Images.
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_
supports Images stored in ``extdump``, ``ntfsdump`` or ``diskdump`` format. We
recommend the use of the ``diskdump`` format. For more information about
snf-image Image formats see `here
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.
1218

    
1219
`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_
also supports three (3) different locations for the above Images to be stored:
1221

    
1222
    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
1223
      in :file:`/etc/default/snf-image`)
1224
    * On a remote host (accessible via public URL e.g: http://... or ftp://...)
1225
    * On Pithos (accessible natively, not only by its public URL)
1226

    
1227
For the purpose of this guide, we will use the Debian Squeeze Base Image found
1228
on the official `snf-image page
1229
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
1230
image is of type ``diskdump``. We will store it in our new Pithos installation.
1231

    
1232
To do so, do the following:
1233

    
1234
a) Download the Image from the official snf-image page.
1235

    
1236
b) Upload the Image to your Pithos installation, either using the Pithos Web
1237
   UI or the command line client `kamaki
1238
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.
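
If you prefer the command line, an upload with kamaki looks roughly like the
sketch below. It assumes kamaki is already installed and configured with your
user's credentials; the exact subcommand and argument order differ between
kamaki versions (older releases use ``store upload``), so check
``kamaki --help`` for the syntax your version expects:

.. code-block:: console

   $ kamaki file upload debian_base-6.0-7-x86_64.diskdump pithos
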
1239

    
1240
Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti in the next section.
1243

    
1244
Of course, you can repeat the procedure to upload more Images, available from
1245
the `official snf-image page
1246
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.
1247

    
1248
.. _ganeti-with-pithos-images:
1249

    
1250
Spawning a VM from a Pithos Image, using Ganeti
1251
-----------------------------------------------
1252

    
1253
Now, it is time to test our installation so far. So, we have Astakos and
1254
Pithos installed, we have a working Ganeti installation, the snf-image
1255
definition installed on all VM-capable nodes and a Debian Squeeze Image on
1256
Pithos. Make sure you also have the `metadata file
1257
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.
1258

    
1259
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:
1260

    
1261
.. code-block:: console
1262

    
1263
   # gnt-instance add -o snf-image+default --os-parameters \
1264
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
1265
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
1266
                      testvm1
1267

    
1268
In the above command:
1269

    
1270
 * ``img_passwd``: the arbitrary root password of your new instance
1271
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
1272
 * ``img_id``: If you want to deploy an Image stored on Pithos (our case), this
1273
               should have the format ``pithos://<UUID>/<container>/<filename>``:
1274
               * ``username``: ``user@example.com`` (defined during Astakos sign up)
1275
               * ``container``: ``pithos`` (default, if the Web UI was used)
1276
               * ``filename``: the name of file (visible also from the Web UI)
1277
 * ``img_properties``: taken from the metadata file. Only the two mandatory
                       properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used. `Learn more
1279
                       <http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_
1280

    
1281
If the ``gnt-instance add`` command returns successfully, then run:
1282

    
1283
.. code-block:: console
1284

    
1285
   # gnt-instance info testvm1 | grep "console connection"
1286

    
1287
to find out where to connect using VNC. If you can connect successfully and can
1288
login to your new instance using the root password ``my_vm_example_passw0rd``,
1289
then everything works as expected and you have your new Debian Base VM up and
1290
running.
1291

    
1292
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
1293
to access the Pithos database and the Pithos backend data (newer versions
1294
require UUID instead of a username). Another issue you may encounter is that in
1295
relatively slow setups, you may need to raise the default HELPER_*_TIMEOUTS in
1296
/etc/default/snf-image. Also, make sure you gave the correct ``img_id`` and
1297
``img_properties``. If ``gnt-instance add`` succeeds but you cannot connect,
1298
again find out what went wrong. Do *NOT* proceed to the next steps unless you
1299
are sure everything works till this point.
1300

    
1301
If everything works, you have successfully connected Ganeti with Pithos. Let's
1302
move on to networking now.
1303

    
1304
.. warning::
1305

    
1306
    You can bypass the networking sections and go straight to
1307
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
1308
    the Cyclades Network Service, but only the Cyclades Compute Service
1309
    (recommended for now).
1310

    
1311
Networking Setup Overview
1312
-------------------------
1313

    
1314
This part is deployment-specific and must be customized based on the specific
1315
needs of the system administrator. However, to do so, the administrator needs
1316
to understand how each level handles Virtual Networks, to be able to setup the
1317
backend appropriately, before installing Cyclades. To do so, please read the
1318
:ref:`Network <networks>` section before proceeding.
1319

    
1320
Since synnefo 0.11 all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:
1324

    
1325
a) Have Ganeti with IP pool management support installed.
1326

    
1327
b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.
1328

    
1329
c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.
1330

    
1331
In order to test that everything is set up correctly before installing Cyclades,
we will perform some tests in this section, and the actual setup will be
done afterwards with snf-manage commands.
1334

    
1335
.. _snf-network:
1336

    
1337
snf-network
1338
~~~~~~~~~~~
1339

    
1340
snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to and sets up a corresponding dhcp lease.
1344

    
1345
Install snf-network on all Ganeti nodes:
1346

    
1347
.. code-block:: console
1348

    
1349
   # apt-get install snf-network
1350

    
1351
Then, in :file:`/etc/default/snf-network` set:
1352

    
1353
.. code-block:: console
1354

    
1355
   MAC_MASK=ff:ff:f0:00:00:00
1356

    
1357
.. _nfdhcpd:
1358

    
1359
nfdhcpd
1360
~~~~~~~
1361

    
1362
Each NIC's IP is chosen by Ganeti (with IP pool management support).
The `kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.
1366

    
1367
.. code-block:: console
1368

    
1369
   # apt-get install nfqueue-bindings-python=0.3+physindev-1
1370
   # apt-get install nfdhcpd
1371

    
1372
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
1373
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
1374
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
1375
VMs. Once you are finished, restart the server on all nodes:
1376

    
1377
.. code-block:: console
1378

    
1379
   # /etc/init.d/nfdhcpd restart
1380

    
1381
If you are using ``ferm``, then you need to run the following:
1382

    
1383
.. code-block:: console
1384

    
1385
   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
1386
   # /etc/init.d/ferm restart
1387

    
1388
or make sure to run after boot:
1389

    
1390
.. code-block:: console
1391

    
1392
   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42
1393

    
1394
and if you have IPv6 enabled:
1395

    
1396
.. code-block:: console
1397

    
1398
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
1399
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44
1400

    
1401
You can check which clients are currently served by nfdhcpd by running:
1402

    
1403
.. code-block:: console
1404

    
1405
   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`
1406

    
1407
When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.
1408
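
For example, a quick way to inspect the latest entries of that log (a simple
sanity check; the exact contents depend on your setup) is:

.. code-block:: console

   # tail -n 20 /var/log/nfdhcpd/nfdhcpd.log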

    
Public Network Setup
--------------------

To achieve basic networking the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) where all VMs will
connect to. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy-ARP), please refer
to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100
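
Note that the above commands do not persist across a reboot. As a sketch only
(assuming Debian's ``bridge-utils`` and ``vlan`` packages and the same names
used above), you could make the bridge permanent with a stanza like the
following in ``/etc/network/interfaces``:

.. code-block:: console

   # hypothetical example stanza; adapt interface names and VLAN tag to your setup
   auto br0
   iface br0 inet manual
           bridge_ports eth0.100
           bridge_stp off
           bridge_fd 0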

    
Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0
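
Before spawning a test VM, you can optionally double-check the new network from
the Ganeti side (the same commands are used again later in this guide):

.. code-block:: console

   # gnt-network list
   # gnt-network info test-net-public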

    
Now, it is time to test that the backend infrastructure is correctly setup for
the Public Network. We will add a new VM, the same way we did in the previous
testing section. However, now we will also add one NIC, configured to be
managed from our previously defined network. Run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the nameserver option in nfdhcpd.conf). This verifies the correct
configuration of ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
correctly your physical host and router.
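
For example (the gateway address comes from our setup above; the external host
is just an illustrative choice):

.. code-block:: console

   root@testvm2:~ # ping -c 3 5.6.7.1
   root@testvm2:~ # ping -c 3 www.debian.org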

    
Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private networks,
each network has to have a different MAC prefix (for the filtering to take
place) or to be "connected" to a different bridge (i.e. a different VLAN).

    
Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, we need one bridge for the MAC
filtered private networks and several (e.g. 20) bridges for the private
networks based on physical VLANs.

Assuming ``eth0`` of both hosts is somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new vlans: ``eth0.0`` - ``eth0.20``
 * add the corresponding vlan to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was setup
correctly.
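
A quick sketch to confirm on each node that every bridge exists and is up
(``brctl show`` lists the bridges and their ports, while the loop below prints
the state line of each ``prvN`` interface):

.. code-block:: console

   # brctl show
   # for prv in $(seq 0 20); do ip link show prv$prv | head -1; done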

    
Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same commands as in the Public Network testing section, but with
extra arguments for the second and third NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have MAC prefix ``aa:00:55``, while ``eth2`` will have the
    default one (``aa:00:00``).

 c) ip link set ``eth1``/``eth2`` up

 d) dhclient ``eth1``/``eth2``

 e) On testvm3 ping 192.168.1.2 and 10.0.0.2 (see the console sketch right
    after this list)
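
A minimal sketch of steps c) to e), as run inside testvm3 (the addresses that
``dhclient`` obtains, and therefore the exact peer IPs, may differ in your
setup):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2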

    
If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At least, we need to set the RabbitMQ endpoint for all tools
that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above setting should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

    
Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

    
.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output (the MD5 hash) in ``/var/lib/ganeti/rapi/users`` as
follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write
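
If you prefer, the whole line can be generated in one go. This is only a
convenience sketch; the ``awk`` filter keeps just the hash, since some
``openssl`` versions prefix the output with ``(stdin)=``:

.. code-block:: console

   # HASH=$(echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5 | awk '{print $NF}')
   # echo "cyclades {HA1}$HASH write" >> /var/lib/ganeti/rapi/users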

    
More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_.

    
You have now finished with all the needed Prerequisites for Cyclades. Let's
move on to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we can
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by 'snf-image' from being stored in the Ganeti configuration (e.g. the
VM password). This is achieved by storing all sensitive information in a cache
backend and exporting it via the VMAPI. The cache entries are invalidated after
the first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.
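
Before moving on, it is worth checking that memcached is actually running and
listening. A quick way to do so (assuming the standard memcached port, 11211,
which is also used in the VMAPI setting below):

.. code-block:: console

   # netstat -ltnp | grep 11211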

    
Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to result in a working system. In general, sane
defaults have been chosen for most of the options, to cover most of the common
scenarios. However, if you want to tweak Cyclades feel free to do so, once you
get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_BASE_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_BASE_URL`` setting
(see also, :ref:`previously <conf-pithos>`).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with
Astakos. It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

    
Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"
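
After changing ``/etc/default/vncauthproxy``, the vncauthproxy daemon most
likely needs to be restarted for the new ``CHUID`` value to take effect
(assuming the init script shipped with the package):

.. code-block:: console

   # /etc/init.d/vncauthproxy restart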

    
We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

    
Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
setup earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have setup the :ref:`Rapi User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been setup correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti cluster,
we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status
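
To verify that the backend is now enabled and that its status has been updated,
you can list the backends again (the output should reflect the changes):

.. code-block:: console

   $ snf-manage backend-list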

    
Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

    
Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to setup a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to attach every
created NIC to a common bridge. After a bridge (e.g. ``br0``) has been created
on every backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to
``'br0'`` and run:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was setup correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

    
Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

As long as those resources have been provisioned, the admin has to define these
two pools in Synnefo:

.. code-block:: console

   $ snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   $ snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

    
Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges. By
default it is not enabled during installation of Cyclades, so let's enable it in
its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

    
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on the GANETI
    MASTER.

    
Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.

    
Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Login and then click 'cyclades' on the
top cloud bar. This should redirect you to:

 `https://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

    
Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

    
Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to setup kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set cloud.default.url \
       "https://node1.example.com/astakos/identity/v2.0"
   $ kamaki config set cloud.default.token USER_TOKEN

Both the Authentication URL and the USER_TOKEN appear on the user's
`API access` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on his/her token (in this case the user is you):

.. code-block:: console

  $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.

    
Upload an Image file to Pithos
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki file create images

To check if the container has been created, list all containers of your
account:

.. code-block:: console

  $ kamaki file list

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump images

The first argument is the local path and the second is the remote container on
Pithos. Check if the file has been uploaded, by listing the container contents:

.. code-block:: console

  $ kamaki file list images

Alternatively, check if the new container and file appear on the Pithos Web UI.

    
Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user UUID is
``u53r-un1qu3-1d``.

Once the Image file has been successfully uploaded on Pithos, we register
it to Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the other
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Cyclades to Ganeti and then `snf-image` (also see
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
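
If you want to double-check the registration from the command line before
switching to the Web UI, listing the registered Images should now include
"Debian Base":

.. code-block:: console

   $ kamaki image list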

    
Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/cyclades/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was setup correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

    
General Testing
===============

Notes
=====