.. _quick-install-admin-guide:

Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos)
    * Compute Service (Cyclades)
    * Image Service (part of Cyclades)
    * Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.

Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can achieve this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com

General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze/``
| ``deb-src http://apt.dev.grnet.gr squeeze/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. Packages from this
repository are not used by default; they must be requested explicitly with the
``-t squeeze-backports`` option in ``apt-get`` operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
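
After editing ``/etc/apt/sources.list``, refresh the package index so that the
new repositories take effect:

.. code-block:: console

   # apt-get update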

You also need a shared directory visible by both nodes. Pithos will save all
data inside this directory. By "all data", we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
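
As an illustration, a minimal NFS setup for this layout could look like the
following (a rough sketch, assuming the standard Debian ``nfs-kernel-server``
and ``nfs-common`` packages; adapt the export options to your environment):

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # echo "/srv/pithos node2.example.com(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos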

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ >= 2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all Django
apps related tables. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the Pithos backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
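
Once the ``postgresql`` package has also been installed on node2 (see the
Node2 section below), you can verify that remote connections work by
connecting from node2 as the ``synnefo`` user:

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps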

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=8',
       '--log-level=debug',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to resolve the hostnames, change ``--worker-class=gevent``
    to ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
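
You can confirm that the user and its permissions are in place by listing them
(the output format varies with the rabbitmq version):

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions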

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data
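
Since the Pithos app on node2 will write to this directory over NFS, it is
worth a quick sanity check from node2 that the ``www-data`` user can create
files in it (this assumes the NFS mount described earlier, and that the
``www-data`` UID matches on both nodes):

.. code-block:: console

   root@node2:~ # su www-data -s /bin/sh -c "touch /srv/pithos/data/test && rm /srv/pithos/data/test"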

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software, you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following (almost
the same contents as on node1; only the number of workers and the ``--timeout``
argument differ):

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=4',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to resolve the hostnames, change ``--worker-class=gevent``
    to ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop
489

    
490
We are now ready with all general prerequisites for node2. Now that we have
491
finished with all general prerequisites for both nodes, we can start installing
492
the services. First, let's install Astakos on node1.
493

    
494

    
495
Installation of Astakos on node1
496
================================
497

    
498
To install astakos, grab the package from our repository (make sure  you made
499
the additions needed in your ``/etc/apt/sources.list`` file, as described
500
previously), by running:
501

    
502
.. code-block:: console
503

    
504
   # apt-get install snf-astakos-app snf-pithos-backend
505

    
506
After successful installation of snf-astakos-app, make sure that also
507
snf-webproject has been installed (marked as "Recommended" package). By default
508
Debian installs "Recommended" packages, but if you have changed your
509
configuration and the package didn't install automatically, you should
510
explicitly install it manually running:
511

    
512
.. code-block:: console
513

    
514
   # apt-get install snf-webproject
515

    
516
The reason snf-webproject is "Recommended" and not a hard dependency, is to give
517
the experienced administrator the ability to install Synnefo in a custom made
518
`Django <https://www.djangoproject.com/>`_ project. This corner case
519
concerns only very advanced users that know what they are doing and want to
520
experiment with synnefo.

.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
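
Any method that produces a long random string will do; for example, one way to
generate such a string on the command line is:

.. code-block:: console

   # python -c "import os, base64; print base64.b64encode(os.urandom(48))"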

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASE_URL = 'https://node1.example.com'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/ui/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/ui/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding the following to ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend that you apply the
following, since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we select
``[no]`` at the question. After a successful sync, we run the migrations needed
for astakos:

.. code-block:: console

    # snf-manage migrate im
    # snf-manage migrate quotaholder_app

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (astakos,
cyclades, and pithos) along with the services they provide. Note that you
have to register at least astakos in order to have a usable authentication
system. For each component, you will be asked to provide its base
installation URL as well as the UI URL (to appear in the Cloudbar).
Moreover, the command will automatically register the resource definitions
offered by the services.

.. code-block:: console

    # snf-register-components

.. note::

   This command is equivalent to running the following series of commands;
   it registers the three components in astakos and then, on each host, it
   exports the respective service definitions, copies the exported json file
   to the astakos host, where it finally imports it:

    .. code-block:: console

       astakos-host$ snf-manage component-add astakos astakos_ui_url
       astakos-host$ snf-manage component-add cyclades cyclades_ui_url
       astakos-host$ snf-manage component-add pithos pithos_ui_url
       astakos-host$ snf-manage service-export-astakos > astakos.json
       astakos-host$ snf-manage service-import --json astakos.json
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json cyclades.json
       pithos-host$ snf-manage service-export-pithos > pithos.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json pithos.json

Setting Default Base Quota for Resources
----------------------------------------

We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects).

.. code-block:: console

    # snf-manage resource-modify --limit-interactive

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.

Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/ui/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good;
let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos specific
documentation. In production, you can also manually activate a user, by sending
him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/ui/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos now.

Installation of Pithos on node2
===============================

To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

.. _conf-pithos:

Configuration of Pithos
=======================

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with Pithos. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:

.. code-block:: console

   ASTAKOS_BASE_URL = 'https://node1.example.com/'

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='

   # Set to False if astakos & pithos are on the same host
   #PITHOS_PROXY_USER_SERVICES = True

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.

The ``ASTAKOS_BASE_URL`` option informs the Pithos app where Astakos is.
The Astakos service is used for user management (authentication, quotas, etc.)

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/ui/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
Pithos feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/ui/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/ui/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment urls there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need of further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

First, find the most recent revision in the migration history:

.. code-block:: console

    root@node2:~ # pithos-migrate history
    2a309a9a3438 -> 27381099d477 (head), alter public add column url
    165ba3fbfe53 -> 2a309a9a3438, fix statistics negative population
    3dd56e750a3 -> 165ba3fbfe53, update account in paths
    230f8ce9c90f -> 3dd56e750a3, Fix latest_version
    8320b1c62d9 -> 230f8ce9c90f, alter nodes add column latest version
    None -> 8320b1c62d9, create index nodes.parent

Finally, we stamp it with the one found in the previous step:

.. code-block:: console

    root@node2:~ # pithos-migrate stamp 27381099d477
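
Since ``pithos-migrate`` wraps alembic, you should be able to verify the
result with the standard ``current`` subcommand (assuming the wrapper exposes
it), which should print the revision you just stamped:

.. code-block:: console

    root@node2:~ # pithos-migrate current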

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos setup. Let's test it now.

Testing of Pithos
=================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to the Pithos UI URL you provided when registering the pithos component
earlier, and you will see the blue interface of the Pithos application. Click
the orange "Upload" button and upload your first file. If the file gets
uploaded successfully, then this is your first sign of a successful Pithos
installation. Go ahead and experiment with the interface to make sure
everything works correctly.

You can also use the Pithos clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos
    * Uploading your custom Images to Pithos
    * Spawning VMs from those custom Images
    * Registering existing Pithos files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.

Cyclades Prerequisites
======================

Before proceeding with the Cyclades installation, make sure you have
successfully set up Astakos and Pithos first, because Cyclades depends on
them. If you don't have a working Astakos and Pithos installation yet, please
return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET provided packages until stable 2.7 is out. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` files for password-less root ssh between
each other. If not, then omit the ``--no-ssh-init`` option below, but be aware
that it will replace the files under ``/root/.ssh/`` and you might lose access
to the master node. Also, make sure there is an LVM volume group named
``ganeti`` that will host your VMs' disks. Finally, set up a bridge interface
on the host machines (e.g. ``br0``). Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. To circumvent this you should edit the file
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image to run successfully, and it may take a few minutes depending on your
Internet connection.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos setup:

.. code-block:: console

    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible by all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested to learn more about snf-image's internals (and even use
it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_ (a rough example
   follows).
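
If you go the kamaki route, the upload looks roughly like the following
(illustrative only; the exact subcommand and syntax depend on your kamaki
version, and kamaki must first be configured with your Astakos URL and user
token):

.. code-block:: console

   $ kamaki file upload debian_base-6.0-7-x86_64.diskdump pithos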

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it, for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
-----------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: if you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithos://<UUID>/<container>/<filename>``:

   * ``UUID``: the identifier of the user who owns the file (in older versions
     this was the username, e.g. ``user@example.com``, defined during the
     Astakos sign up)
   * ``container``: ``pithos`` (default, if the Web UI was used)
   * ``filename``: the name of the file (visible also from the Web UI)

 * ``img_properties``: taken from the metadata file. We use only the two
   mandatory properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
   <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require the UUID instead of a username). Another issue you may encounter is
that in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To do so, please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11, all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing Cyclades,
we will perform some testing actions in this section; the actual setup will be
done afterwards with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script that is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases and, when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
VMs. Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart
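
For reference, the two settings mentioned above could look roughly like this
inside ``/etc/nfdhcpd/nfdhcpd.conf`` (illustrative; the surrounding structure
of the file may differ, and ``1.2.3.4`` stands for your own DNS server):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 1.2.3.4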
1387

    
1388

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure to run after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44
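
To verify that the rules are in place, you can list the mangle table on each
node; the exact output depends on your setup:

.. code-block:: console

   # iptables -t mangle -L PREROUTING -n
   # ip6tables -t mangle -L PREROUTING -n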

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

After running the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

The simplest way to achieve basic networking is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs will
connect. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy ARP), please
refer to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan
   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100
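
You can quickly verify the result on each node; the exact output will vary
per host:

.. code-block:: console

   # brctl show br0
   # ip link show eth0.100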

Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0
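
You can confirm that the network is known to Ganeti and connected to the
nodegroup by running:

.. code-block:: console

   # gnt-network list
   # gnt-network info test-net-public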

Now, it is time to test that the backend infrastructure is correctly set up
for the Public Network. We will add a new VM, the same way we did in the
previous testing section. However, now we will also add one NIC, configured to
be managed from our previously defined network. Run on the GANETI-MASTER
(node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the nameserver option in nfdhcpd.conf). This verifies the correct
configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also correctly
configured your physical host and router.
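
For example, run from inside the VM (the external target below is only
illustrative; any reachable host will do):

.. code-block:: console

   root@testvm2:~ # ping -c 3 5.6.7.1
   root@testvm2:~ # ping -c 3 www.example.com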

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1`` to ``prv100``) is needed, each
bridged on a different physical VLAN. To assure isolation among end-users'
private networks, each network has to have a different MAC prefix (for the
filtering to take place) or to be "connected" to a different bridge (in
effect, a different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges (one for MAC filtered private
networks and various, e.g. 20, for private networks based on physical VLANs),
and assuming ``eth0`` of both hosts is somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new vlans: ``eth0.0`` - ``eth0.20``
 * add each vlan to its corresponding bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (an example
session is sketched after this list):

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
    the default one (``aa:00:00``)

 c) Bring up ``eth1``/``eth2`` with ``ip link set <iface> up``

 d) Run ``dhclient`` on ``eth1``/``eth2``

 e) From testvm3, ping 192.168.1.2/10.0.0.2
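
For example, inside ``testvm3``; note that the third NIC was added with
``ip=none``, so ``eth2`` will not receive a DHCP lease and needs a static
address, and the peer addresses below depend on what the IP pool actually
assigned to each instance:

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # ip addr add 10.0.0.1/24 dev eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2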

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this, by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write
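
You can quickly check that the new user works; this assumes RAPI listens on
its default port (5080), and the exact JSON output will vary:

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://localhost:5080/2/info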

More about Ganeti's RAPI users `here.
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_

You have now finished with all needed Prerequisites for Cyclades. Let's move on
to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades is installed and we
proceed with its configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by 'snf-image' (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a
cache backend and exporting it via the VMAPI. The cache entries are
invalidated after the first request. Synnefo uses
`memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.

Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to end up with a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

The ``ASTAKOS_BASE_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_BASE_URL`` setting
(see also, :ref:`previously <conf-pithos>`).

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment urls there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored). So we set that
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. Those settings should have the
same values as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
file (see above), and reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/ui/login"
   UI_LOGOUT_URL = "https://node1.example.com/ui/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users,
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect users
when they log out. We point that to Astakos, too.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"
   VMAPI_BASE_URL = "https://node1.example.com"
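
Since the VMAPI cache above relies on the local memcached instance, you may
want to confirm that memcached is running and listening on its default port
(11211):

.. code-block:: console

   # netstat -ntlp | grep 11211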

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="nobody:www-data"
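
If the vncauthproxy service is already running, restart it so that the change
takes effect; the init script name below follows the Debian package:

.. code-block:: console

   # /etc/init.d/vncauthproxy restart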

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
setup earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have setup the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we setup the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to setup a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge. After having a bridge (e.g. ``br0``) created on every
backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'``:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was setup correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the administrator has to define
the two corresponding pools in Synnefo:

.. code-block:: console

   node1 # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   node1 # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges. By
default it is not enabled during installation of Cyclades, so let's enable it in
its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
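
For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log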

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Login and then click 'cyclades' on the
top cloud bar. This should redirect you to:

 `http://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to setup kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set user.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/image"
   $ kamaki config set file.url "https://node2.example.com/v1"
   $ kamaki config set token USER_TOKEN

The USER_TOKEN appears on the user's `Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on his/her token (in this case the user is you):

.. code-block:: console

   $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.

Upload an Image file to Pithos
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki file create images

To check if the container has been created, list all containers of your
account:

.. code-block:: console

   $ kamaki file list

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump images

The first argument is the local path and the second is the remote container on
Pithos. Check if the file has been uploaded, by listing the container contents:

.. code-block:: console

   $ kamaki file list images

Alternatively, check if the new container and file appear on the Pithos Web UI.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user UUID is
``u53r-un1qu3-1d``.

Once the Image file has been successfully uploaded on Pithos, we register
it to Cyclades by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the
remaining properties are optional, but recommended, so that the Images appear
nicely on the Cyclades Web UI. ``Debian Base`` will appear as the name of this
Image. The ``OS`` property's valid values may be found in the ``IMAGE_ICONS``
variable inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Cyclades to Ganeti and then `snf-image` (also see
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
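
You can verify that the registration succeeded by listing the registered
Images; the new "Debian Base" Image should appear in the output:

.. code-block:: console

   $ kamaki image list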

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was setup correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====