.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos)
    * Compute Service (Cyclades)
    * Image Service (part of Cyclades)
    * Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can achieve this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all synnefo components you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt2.dev.grnet.gr stable/``
| ``deb-src http://apt2.dev.grnet.gr stable/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
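
After editing ``/etc/apt/sources.list``, refresh the package lists. Packages
from ``squeeze-backports`` are only picked up when you name the repository
explicitly with ``-t``, as the gunicorn and gevent installations below do. A
minimal sketch (``<package>`` is a placeholder, not a real package name):

.. code-block:: console

   # apt-get update
   # apt-get -t squeeze-backports install <package>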

You also need a shared directory visible to both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
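
Setting up the NFS share itself is outside synnefo; a minimal sketch, assuming
a stock Debian NFS server on node1 and the example IPs of this guide, could
look like this:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs 4.3.2.1:/srv/pithos /srv/pithos

Adapt the export options and add an ``/etc/fstab`` entry on node2 to taste; the
important parts are the shared path and the ``no_root_squash`` flag.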

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).
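
Once ``ntpd`` is running (the ``ntp`` package is installed below), you can
verify that the clocks are being synchronized with, for example:

.. code-block:: console

   # ntpq -p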

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, that will host all
django-apps related tables. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the Pithos backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host    all     all     4.3.2.1/32      md5
    host    all     all     4.3.2.2/32      md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=8',
       '--log-level=debug',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following
(almost the same contents as on node1; note the different ``--workers`` value
and the extra ``--timeout`` argument):

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=4',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-quotaholder-app snf-pithos-backend

After successful installation of snf-astakos-app, make sure that also
snf-webproject has been installed (marked as "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't install automatically, you should
install it explicitly by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install Synnefo in a custom made
`Django <https://www.djangoproject.com/>`_ project. This corner case
concerns only very advanced users that know what they are doing and want to
experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASEURL`` is the astakos home page.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding the following to ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

    # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

    # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
    # snf-manage service-add "cyclades" https://node1.example.com/ui/
    # snf-manage service-add "pithos" https://node2.example.com/ui/

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.

Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and that the request has been sent to the administrators. So far so
good; let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos now.

Installation of Pithos on node2
===============================

To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that also
snf-webproject has been installed (marked as "Recommended" package). Refer to
the "Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos
=======================

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set at node1.
You only need to change settings that have to do with Pithos. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/'

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='

   # Set to False if astakos & pithos are on the same host
   #PITHOS_PROXY_USER_SERVICES = True

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.

The ``ASTAKOS_URL`` option tells the Pithos app the URI where the astakos
authentication API is available.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
Pithos feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
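
A minimal sketch of that setting in the same file, assuming the shipped
default is the performance-friendly value implied above:

.. code-block:: console

   # Leave disabled for faster uploads, or set to True for OpenStack Object
   # Storage API compatible MD5 checksums.
   PITHOS_UPDATE_MD5 = False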

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from astakos all the information needed to fill its
own cloudbar. So we point them to our astakos deployment's URLs.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

First, find the most recent revision in the migration history:

.. code-block:: console

    root@node2:~ # pithos-migrate history
    2a309a9a3438 -> 27381099d477 (head), alter public add column url
    165ba3fbfe53 -> 2a309a9a3438, fix statistics negative population
    3dd56e750a3 -> 165ba3fbfe53, update account in paths
    230f8ce9c90f -> 3dd56e750a3, Fix latest_version
    8320b1c62d9 -> 230f8ce9c90f, alter nodes add column latest version
    None -> 8320b1c62d9, create index nodes.parent

Finally, we stamp it with the one found in the previous step:

.. code-block:: console

    root@node2:~ # pithos-migrate stamp 27381099d477

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos setup. Let's test it now.


Testing of Pithos
=================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to the Pithos UI (registered earlier as ``https://node2.example.com/ui/``)
and you will see the blue interface of the Pithos application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos
    * Uploading your custom Images to Pithos
    * Spawning VMs from those custom Images
    * Registering existing Pithos files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.

Cyclades Prerequisites
======================

Before proceeding with the Cyclades installation, make sure you have
successfully set up Astakos and Pithos first, because Cyclades depends on
them. If you don't have a working Astakos and Pithos installation yet, please
return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET provided packages until stable 2.7 is out. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1
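
As a quick sanity check, you can inspect the installed versions with ``dpkg``
(``snf-ganeti`` and ``ganeti-htools`` are the packages installed above):

.. code-block:: console

   # dpkg -l snf-ganeti ganeti-htools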

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and authorized_keys for password-less root ssh between each other.
If they don't, skip passing ``--no-ssh-init`` below, but be aware that it will
replace the ``/root/.ssh/*`` related files and you might lose access to the
master node. Also, make sure there is an LVM volume group named ``ganeti`` that
will host your VMs' disks (a minimal sketch of creating one is shown below).
Finally, set up a bridge interface on the host machines (e.g. ``br0``).
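
Creating the volume group is plain LVM and not synnefo-specific; a sketch,
assuming a spare disk ``/dev/sdb`` (adapt the device to your hardware):

.. code-block:: console

   # pvcreate /dev/sdb
   # vgcreate ganeti /dev/sdb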

Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. To circumvent this you should edit the file
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image to run successfully, and it may take a few minutes depending on your
Internet connection.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos setup:

.. code-block:: console

    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
-----------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: if you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithos://<UUID>/<container>/<filename>``, where:

   * ``UUID``: the Image owner's identifier; newer versions require the user's
     UUID, while older ones accepted the username (e.g. ``user@example.com``,
     defined during Astakos sign up)
   * ``container``: ``pithos`` (default, if the Web UI was used)
   * ``filename``: the name of the file (visible also from the Web UI)
 * ``img_properties``: taken from the metadata file. Only the two mandatory
   properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used. `Learn more
   <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To do so, please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11 all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing Cyclades,
we will perform some tests in this section; the actual setup will be
done afterwards with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support).
The `kvm-vif-bridge` script sets up DHCP leases; when the VM boots and
makes a DHCP request, iptables will mangle the packet and `nfdhcpd` will
create a DHCP response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
VMs.
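
Inside ``nfdhcpd.conf`` the two variables mentioned above would end up looking
something like this (the nameserver IP is only an example; use your own DNS):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 4.3.2.10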
1358

    
1359
.. code-block:: console
1360

    
1361
   # /etc/init.d/nfdhcpd restart
1362

    
1363
If you are using ``ferm``, then you need to run the following:
1364

    
1365
.. code-block:: console
1366

    
1367
   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
1368
   # /etc/init.d/ferm restart
1369

    
1370
or make sure to run after boot:
1371

    
1372
.. code-block:: console
1373

    
1374
   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42
1375

    
1376
and if you have IPv6 enabled:
1377

    
1378
.. code-block:: console
1379

    
1380
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
1381
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44
1382

    
1383
You can check which clients are currently served by nfdhcpd by running:
1384

    
1385
.. code-block:: console
1386

    
1387
   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`
1388

    
1389
When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.
1390

    
1391
Public Network Setup
--------------------

The simplest way to achieve basic networking is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs
connect. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy-ARP), please
refer to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100

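Before moving on, you can quickly verify the result on each node. The exact
output will differ from system to system, but ``br0`` should exist and have
``eth0.100`` as one of its ports:

.. code-block:: console

   # brctl show br0
   # ip link show eth0.100
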
Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, the same way we did it in the
previous testing section. However, this time we will also add one NIC,
configured to be managed by our previously defined network. Run on the
GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the nameserver option in nfdhcpd.conf). This confirms the correct
configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private networks,
each network has to have a different MAC prefix (for the filtering to take
place) or to be "connected" to a different bridge (i.e. a different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, we need one bridge for the MAC
filtered private networks and several (e.g. 20) for the private networks based
on physical VLANs.

Assuming ``eth0`` of both hosts is somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new vlans: ``eth0.0`` - ``eth0.20``
 * add the corresponding vlan to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

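A healthy (and here heavily abbreviated) output would look roughly like the
following, with each ``prvN`` bridge holding the matching ``eth0.N`` VLAN
interface; the bridge ids will of course differ on your nodes:

.. code-block:: console

   # brctl show
   bridge name     bridge id               STP enabled     interfaces
   br0             8000.xxxxxxxxxxxx       no              eth0.100
   prv0            8000.xxxxxxxxxxxx       no              eth0.0
   prv1            8000.xxxxxxxxxxxx       no              eth0.1
   ...
   prv20           8000.xxxxxxxxxxxx       no              eth0.20
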
Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private network and their third
NIC connected to the first physical VLAN private network. Now, connect to the
instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
    the default one (``aa:00:00``).

 c) Bring the interfaces up: ``ip link set eth1 up`` and ``ip link set eth2 up``.

 d) Request leases: ``dhclient eth1`` and ``dhclient eth2``.

 e) On testvm3, ping ``192.168.1.2`` and ``10.0.0.2`` (see the example session
    right after this list).

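A minimal session on ``testvm3`` could look like the following. The addresses
are the ones used in our example networks and may differ in your case, and
``testvm4`` is assumed to have already brought its own extra NICs up in the
same way:

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2
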
If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

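To quickly check that the new user is accepted, you can query the RAPI endpoint
directly (Ganeti's RAPI daemon listens on port 5080 by default; depending on
your Ganeti version you may need to restart ``ganeti-rapi`` before the updated
users file is picked up):

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://ganeti.node1.example.com:5080/2/info
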
You can read more about Ganeti's RAPI users `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.

You have now finished with all the needed Prerequisites for Cyclades. Let's
move on to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we can
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by `snf-image` (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a cache
backend and exporting it via the VMAPI. The cache entries are invalidated after
the first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.

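Since the VMAPI cache lives in ``memcached``, it is worth confirming that the
daemon is running and listening on its default port (11211) before continuing,
for example with:

.. code-block:: console

   # netstat -tnlp | grep 11211
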
Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to end up with a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos's
``ASTAKOS_URL``, set up :ref:`previously <conf-pithos>`).

.. warning::

   All services must match the quotaholder token and URL configured for the
   quotaholder.

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. Those settings should have the same
values as in ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users,
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect the
user when he/she logs out. We point that to Astakos, too.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"
   VMAPI_BASE_URL = "https://node1.example.com"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can verify that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we set up the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want to
learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge. After having a bridge (e.g. ``br0``) created on every
backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'``
and run:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo:

.. code-block:: console

   root@testvm1:~ # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   root@testvm1:~ # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during the installation of Cyclades, so let's
enable it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

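For example, to follow the dispatcher's log in real time:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
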
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER.

Apply Quotas
------------

Finally, run the following commands so that the services' resources are
registered with Astakos and quota accounting is brought in sync across the
whole installation:

.. code-block:: console

   node1 # snf-manage astakos-init --load-service-resources
   node1 # snf-manage quota --verify
   node1 # snf-manage quota --sync
   node2 # snf-manage pithos-reset-usage
   node1 # snf-manage reconcile-resources-cyclades --fix

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Log in and then click 'cyclades' on the
top cloud bar. This should redirect you to:

 `https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set user.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/image"
   $ kamaki config set file.url "https://node2.example.com/v1"
   $ kamaki config set token USER_TOKEN

The USER_TOKEN appears on the user's `Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on his/her token (in this case, the user is you):

.. code-block:: console

   $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.

Upload an Image file to Pithos
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki file create images

To check if the container has been created, list all containers of your
account:

.. code-block:: console

   $ kamaki file list

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump images

The first argument is the local path and the second is the remote container on
Pithos. Check if the file has been uploaded, by listing the container contents:

.. code-block:: console

   $ kamaki file list images

Alternatively, check if the new container and file appear on the Pithos Web UI.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user UUID is
``u53r-un1qu3-1d``.

Once the Image file has been successfully uploaded on Pithos, we register it
with Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the other
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Cyclades to Ganeti and then to `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====