.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos+)
    * Compute Service (Cyclades)
    * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+ which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can achieve this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, add the following lines to your
``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
| ``deb http://apt.dev.grnet.gr squeeze-backports main``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible by both nodes. Pithos+ will save all
data inside this directory. By "all data", we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2 has
this directory mounted under ``/srv/pithos``, too.
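
A minimal NFS setup for this guide could look like the following sketch. It is
only illustrative (the official packages do not set this up for you) and
assumes the standard Debian ``nfs-kernel-server`` and ``nfs-common`` packages;
adjust paths and IPs to your environment. On node1:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,no_root_squash,sync,no_subtree_check)" >> /etc/exports
   root@node1:~ # /etc/init.d/nfs-kernel-server restart

and on node2:

.. code-block:: console

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos

To make the mount persistent across reboots, you would also add a corresponding
entry to node2's ``/etc/fstab``.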

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
own section.

Finally, Cyclades and the Ganeti nodes are required to have synchronized system
clocks (e.g. by running ntpd).
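
Once ``ntp`` is installed (this is done below as part of each node's
prerequisites), a quick way to verify that the clocks are actually being
synchronized is the standard query:

.. code-block:: console

   # ntpq -p

A peer marked with ``*`` in the output indicates that the node is synchronized
to that time source.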

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6, again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
related to the django apps. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be created
on node2 instead, but we do it on node1 for simplicity. We will create all
needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'``:

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:``:

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
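
Optionally, once node2 has its PostgreSQL client packages installed (this
happens in node2's general prerequisites below), you can verify from node2 that
remote connections are accepted; this is just a sanity check, not a required
step:

.. code-block:: console

   root@node2:~ # psql -h 4.3.2.1 -U synnefo -d snf_apps -c "SELECT 1;"

If the password ``example_passw0rd`` is accepted and the query returns one row,
the ``listen_addresses`` and ``pg_hba.conf`` changes are working.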

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=8',
       '--log-level=debug',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to resolve the hostnames, change ``--worker-class=gevent``
    to ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges; this will be done automatically
during the Cyclades setup.
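
You can verify that the user and its permissions have been created as expected
with the standard rabbitmqctl listing commands:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions

The ``synnefo`` user should appear in both listings, with ``.*`` for the
configure, write and read permissions.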

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data
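
Since this directory lives on the shared NFS export, it is worth a quick sanity
check that node2 sees the same ownership and that ``www-data`` can write to it.
This is only an illustrative check, assuming the mount described in the General
Prerequisites section:

.. code-block:: console

   root@node2:~ # ls -ld /srv/pithos/data
   root@node2:~ # su -s /bin/sh -c "touch /srv/pithos/data/test && rm /srv/pithos/data/test" www-data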

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6, again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following
(same contents as in node1; you can just copy/paste the file):

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=4',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to resolve the hostnames, change ``--worker-class=gevent``
    to ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-quotaholder-app snf-pithos-backend

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (it is marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't install automatically, you should install
it explicitly by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install Synnefo in a custom-made
`Django <https://www.djangoproject.com/>`_ project. This corner case
concerns only very advanced users that know what they are doing and want to
experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
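
One convenient way to generate such a random value is shown below; this is just
an example, any sufficiently long random string will do:

.. code-block:: console

   # openssl rand -base64 48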

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our deployment (for all
services). ``ASTAKOS_BASEURL`` is the astakos home page.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
This requires access to a running mail server, so we have disabled it for now
by setting its value to None. For more information on this, read the relevant
:ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf``:

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding to
    ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Since version 0.13 you need to configure some basic settings for the new *Quota*
feature.

Specifically:

Edit ``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
    QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend that you apply the
following, since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should also modify psycopg2 behavior,
so it works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

    # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

    # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
    # snf-manage service-add "cyclades" https://node1.example.com/ui/
    # snf-manage service-add "pithos+" https://node2.example.com/ui/

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and that the request has been sent to the administrators. So far so
good, let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (it is marked as a "Recommended" package). Refer to
the "Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set at node1.
You only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
   PITHOS_USER_CATALOG_URL = 'https://node1.example.com/user_catalogs'
   PITHOS_USER_FEEDBACK_URL = 'https://node1.example.com/feedback'
   PITHOS_USER_LOGIN_URL = 'https://node1.example.com/login'

   PITHOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   PITHOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
   PITHOS_USE_QUOTAHOLDER = True

   # Set to False if astakos & pithos are on the same host
   #PITHOS_PROXY_USER_SERVICES = True


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory during node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI the
astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos+ service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment urls there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need of further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

First, find the most recent revision in the migration history:

.. code-block:: console

    root@node2:~ # pithos-migrate history
    2a309a9a3438 -> 27381099d477 (head), alter public add column url
    165ba3fbfe53 -> 2a309a9a3438, fix statistics negative population
    3dd56e750a3 -> 165ba3fbfe53, update account in paths
    230f8ce9c90f -> 3dd56e750a3, Fix latest_version
    8320b1c62d9 -> 230f8ce9c90f, alter nodes add column latest version
    None -> 8320b1c62d9, create index nodes.parent

Finally, we stamp it with the one found in the previous step:

.. code-block:: console

    root@node2:~ # pithos-migrate stamp 27381099d477

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui/``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or MAC.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos+
    * Uploading your custom Images to Pithos+
    * Spawning VMs from those custom Images
    * Registering existing Pithos+ files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` for password-less root ssh between each
other. If they don't, omit the ``--no-ssh-init`` flag below, but be aware that
it will replace the /root/.ssh/* related files and you might lose access to the
master node. Also, make sure there is an lvm volume group named ``ganeti`` that
will host your VMs' disks. Finally, set up a bridge interface on the host
machines (e.g. br0). Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
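
Before moving on, you can confirm that the cluster is healthy and that node2
has joined it, using the standard Ganeti commands (not Synnefo-specific):

.. code-block:: console

    root@node1:~ # gnt-cluster verify
    root@node1:~ # gnt-node list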

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos+. It also needs `python-psycopg2` to be
able to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. To circumvent this you should edit the file
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image to run successfully, and it may take a few minutes depending on your
Internet connection.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos+. This means that
it can talk directly to the Pithos+ backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible by all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for installation
instructions, documentation on the design and implementation, and supported
Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_ (see the example below).
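
With a configured kamaki client, the upload could look roughly like the
following. This is only an illustrative sketch: the exact subcommand and
arguments depend on your kamaki version, and ``pithos`` is assumed to be the
target container (the default one when the Web UI is used):

.. code-block:: console

   $ kamaki store upload debian_base-6.0-7-x86_64.diskdump pithos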
1239

    
1240
Once the Image is uploaded successfully, download the Image's metadata file
1241
from the official snf-image page. You will need it, for spawning a VM from
1242
Ganeti, in the next section.
1243

    
1244
Of course, you can repeat the procedure to upload more Images, available from
1245
the `official snf-image page
1246
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.
1247

    
1248
.. _ganeti-with-pithos-images:
1249

    
1250
Spawning a VM from a Pithos+ Image, using Ganeti
1251
------------------------------------------------
1252

    
1253
Now, it is time to test our installation so far. So, we have Astakos and
1254
Pithos+ installed, we have a working Ganeti installation, the snf-image
1255
definition installed on all VM-capable nodes and a Debian Squeeze Image on
1256
Pithos+. Make sure you also have the `metadata file
1257
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.
1258

    
1259
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:
1260

    
1261
.. code-block:: console
1262

    
1263
   # gnt-instance add -o snf-image+default --os-parameters \
1264
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
1265
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
1266
                      testvm1
1267

    
1268
In the above command:
1269

    
1270
 * ``img_passwd``: the arbitrary root password of your new instance
1271
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
1272
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
1273
               should have the format ``pithos://<UUID>/<container>/<filename>``:
1274
               * ``username``: ``user@example.com`` (defined during Astakos sign up)
1275
               * ``container``: ``pithos`` (default, if the Web UI was used)
1276
               * ``filename``: the name of file (visible also from the Web UI)
1277
 * ``img_properties``: taken from the metadata file. Used only the two mandatory
1278
                       properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
1279
                       <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_
1280

    
1281
If the ``gnt-instance add`` command returns successfully, then run:
1282

    
1283
.. code-block:: console
1284

    
1285
   # gnt-instance info testvm1 | grep "console connection"
1286

    
1287
to find out where to connect using VNC. If you can connect successfully and can
1288
login to your new instance using the root password ``my_vm_example_passw0rd``,
1289
then everything works as expected and you have your new Debian Base VM up and
1290
running.
1291

    
1292
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
1293
to access the Pithos+ database and the Pithos+ backend data (newer versions
1294
require UUID instead of a username). Another issue you may encounter is that in
1295
relatively slow setups, you may need to raise the default HELPER_*_TIMEOUTS in
1296
/etc/default/snf-image. Also, make sure you gave the correct ``img_id`` and
1297
``img_properties``. If ``gnt-instance add`` succeeds but you cannot connect,
1298
again find out what went wrong. Do *NOT* proceed to the next steps unless you
1299
are sure everything works till this point.
1300

    
1301
If everything works, you have successfully connected Ganeti with Pithos+. Let's
1302
move on to networking now.
1303

    
1304
.. warning::
1305

    
1306
    You can bypass the networking sections and go straight to
1307
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
1308
    the Cyclades Network Service, but only the Cyclades Compute Service
1309
    (recommended for now).
1310

    
1311
Networking Setup Overview
1312
-------------------------
1313

    
1314
This part is deployment-specific and must be customized based on the specific
1315
needs of the system administrator. However, to do so, the administrator needs
1316
to understand how each level handles Virtual Networks, to be able to setup the
1317
backend appropriately, before installing Cyclades. To do so, please read the
1318
:ref:`Network <networks>` section before proceeding.
1319

    
1320
Since synnefo 0.11 all network actions are managed with the snf-manage
1321
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
1322
snf-network, bridges, vlans) to be already configured correctly. The only
1323
actions needed in this point are:
1324

    
1325
a) Have Ganeti with IP pool management support installed.
1326

    
1327
b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.
1328

    
1329
c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.
1330

    
1331
In order to test that everything is setup correctly before installing Cyclades,
1332
we will make some testing actions in this section, and the actual setup will be
1333
done afterwards with snf-manage commands.
1334

    
1335
.. _snf-network:
1336

    
1337
snf-network
1338
~~~~~~~~~~~
1339

    
1340
snf-network includes `kvm-vif-bridge` script that is invoked every time
1341
a tap (a VM's NIC) is created. Based on environment variables passed by
1342
Ganeti it issues various commands depending on the network type the NIC is
1343
connected to and sets up a corresponding dhcp lease.
1344

    
1345
Install snf-network on all Ganeti nodes:
1346

    
1347
.. code-block:: console
1348

    
1349
   # apt-get install snf-network
1350

    
1351
Then, in :file:`/etc/default/snf-network` set:
1352

    
1353
.. code-block:: console
1354

    
1355
   MAC_MASK=ff:ff:f0:00:00:00
1356

    
1357
.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases and, when the VM boots and makes a
dhcp request, iptables will mangle the packet and `nfdhcpd` will create a dhcp
response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
VMs. Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure to run after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

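Regardless of whether the rules were added via ``ferm`` or by hand, you can
optionally verify that they are in place by listing the mangle PREROUTING
chain; this is just a quick sanity check and the exact rule order may differ
on your setup:

.. code-block:: console

   # iptables -t mangle -L PREROUTING -n --line-numbers
   # ip6tables -t mangle -L PREROUTING -n --line-numbers
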
You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.

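For example, to watch nfdhcpd handle requests live (e.g. while a VM boots and
sends its DHCP request later on), you could simply keep a tail running on the
node hosting the instance:

.. code-block:: console

   # tail -f /var/log/nfdhcpd/nfdhcpd.log
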
Public Network Setup
--------------------

To achieve basic networking the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) where all VMs will
connect to. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy-ARP), please refer
to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100

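Before continuing, it may be worth confirming on each node that the bridge
exists, is up, and has the VLAN interface enslaved; a quick optional check
(output will of course vary per host) could be:

.. code-block:: console

   # brctl show br0
   # ip link show eth0.100
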
Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0

Now, it is time to test that the backend infrastructure is correctly set up
for the Public Network. We will add a new VM, the same way we did it in the
previous testing section. However, now we will also add one NIC, configured to
be managed from our previously defined network. Run on the GANETI-MASTER
(node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), IP routes (default via 5.6.7.1) and DNS
config (nameserver option in nfdhcpd.conf). This shows correct configuration
of ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.

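As a concrete sketch of that connectivity test, from inside the VM you could
first ping the gateway and then an external host (the hostname below is only
an example; use whatever is reachable from your network):

.. code-block:: console

   root@testvm2:~ # ping -c 3 5.6.7.1
   root@testvm2:~ # ping -c 3 www.debian.org
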
Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private
networks, each network has to have a different MAC prefix (for the filtering
to take place) or be "connected" to a different bridge (i.e. a different
VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, one for MAC filtered private
networks and several (e.g. 20) for private networks based on physical VLANs,
proceed as follows.

Assuming ``eth0`` of both hosts is somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new vlans: ``eth0.0`` - ``eth0.20``
 * add the corresponding vlan to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

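If you also want to confirm that the VLAN sub-interfaces themselves were
created, the kernel keeps a list of them once the ``8021q`` module is loaded;
an optional quick check could be:

.. code-block:: console

   # cat /proc/net/vlan/config
   # ip -d link show eth0.1
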
Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with one
more argument for the second NIC:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (a concrete
command sketch follows the list below):

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
    the default one (``aa:00:00``).

 c) Bring up ``eth1``/``eth2`` with ``ip link set <iface> up``.

 d) Run ``dhclient`` on ``eth1``/``eth2``.

 e) On testvm3, ping ``192.168.1.2`` and ``10.0.0.2``.

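The checks in items c) to e) boil down to a handful of commands inside the
guest; a minimal sketch (run on testvm3, adjusting the addresses to whatever
was actually assigned in your deployment) could be:

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # ping -c 3 192.168.1.2
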
If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

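To make sure the new credentials are actually accepted by the Ganeti RAPI
daemon, you can issue a simple authenticated request against it; this is only
a sanity-check sketch and assumes the RAPI daemon listens on its default port
(5080):

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://ganeti.node1.example.com:5080/2/info
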
More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.

You have now finished with all needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.

Since version 0.13, Synnefo uses the VMAPI to prevent sensitive data needed by
'snf-image' (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a
cache backend and exporting it via VMAPI. The cache entries are invalidated
after the first request. Synnefo uses `memcached <http://memcached.org/>`_ as
a `Django <https://www.djangoproject.com/>`_ cache backend.

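Since the VMAPI relies on memcached, it may be worth confirming that the
memcached daemon installed above is actually listening on its default port
(11211) before continuing; one possible quick check:

.. code-block:: console

   # netstat -ntlp | grep 11211
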
Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed for a working system. In
general, sane defaults have been chosen for most of the options, to cover most
of the common scenarios. However, if you want to tweak Cyclades feel free to
do so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

.. warning::

   All services must match the quotaholder token and url configured for
   quotaholder.

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored). So we set that
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect the
user when he/she logs out. We point that to Astakos, too.

Edit ``/etc/synnefo/20-snf-cyclades-app-quotas.conf``:

.. code-block:: console

   CYCLADES_USE_QUOTAHOLDER = True
   CYCLADES_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   CYCLADES_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"
   VMAPI_BASE_URL = "https://node1.example.com"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

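If you want to double-check that the flavors were actually loaded, you can try
listing them with snf-manage; this is an optional check and assumes the
listing command is available in your Synnefo version:

.. code-block:: console

   $ snf-manage flavor-list
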
Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI user <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti
cluster, we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more, please see /*TODO*/.

Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge. After having a bridge (e.g. ``br0``) created on every
backend node, set the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'``
and run:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the admin has to define these two
pools in Synnefo:

.. code-block:: console

   node1 # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   node1 # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

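For example (a simple sketch), you could inspect the last lines of the log and
grep for errors while the dispatcher connects to the AMQP broker and sets up
its queues:

.. code-block:: console

   # tail -n 50 /var/log/synnefo/dispatcher.log
   # grep -i error /var/log/synnefo/dispatcher.log
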
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quotas
------------

.. code-block:: console

   node1 # snf-manage astakos-init --load-service-resources
   node1 # snf-manage astakos-quota --verify
   node1 # snf-manage astakos-quota --sync
   node2 # snf-manage pithos-reset-usage
   node1 # snf-manage cyclades-reset-usage

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup.

Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

 `https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored on
Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
steps, even though you may already have uploaded an Image on Pithos+ from a
:ref:`previous <snf-image-images>` section:

 * Upload an Image file to Pithos+
 * Register that Image file to Plankton
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set store.enable on
   $ kamaki config set store.pithos_extensions on
   $ kamaki config set store.account USER_UUID
   $ kamaki config set global.token USER_TOKEN

The USER_TOKEN and USER_UUID appear on the user's (``user@example.com``)
`Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly, by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos+ Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
                         /srv/images/debian_base-6.0-7-x86_64.diskdump \
                         debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote path on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.

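Alternatively, depending on your `kamaki` version, you may be able to verify
the upload from the command line by listing the container's contents; the
freshly uploaded object should appear in the output:

.. code-block:: console

   $ kamaki store list images
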
Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump`` as an Image in
Plankton. This Image will be public (``--public``), so all users will be able
to spawn VMs from it, and it is of type ``diskdump``. The first two properties
(``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the rest of the
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image.
The ``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

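To confirm the registration from the command line before switching to the
browser, you can list the Images that Plankton now serves; the newly
registered "Debian Base" should be among them:

.. code-block:: console

   $ kamaki image list
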
Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returns successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====