.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos+)
    * Compute Service (Cyclades)
    * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can achieve this by editing the file ``/etc/hosts``
    on both machines, and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
| ``deb http://apt.dev.grnet.gr squeeze-backports main``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the no_root_squash flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
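
For example, a minimal NFS setup might look like the following (a sketch,
assuming the ``nfs-kernel-server`` package is installed on node1 and
``nfs-common`` on node2; adapt the export options to your environment):

.. code-block:: console

   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_root_squash)" >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos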

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).
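
For instance, on Debian you could install the NTP daemon and then verify that
it has selected its peers (one possible check; any other time synchronization
mechanism works equally well):

.. code-block:: console

   # apt-get install ntp
   # ntpq -p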

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6, again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host all tables
of the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
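
Optionally, you can verify that remote connections work, e.g. from node2 once
the PostgreSQL client tools are installed there (a quick sanity check, assuming
the ``pg_hba.conf`` entries above are in place):

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps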

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=8',
       '--log-level=debug',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically
during the Cyclades setup.

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6, again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following (almost
the same contents as in node1; note the different ``--workers`` value and the
added ``--timeout``):

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=4',
       '--log-level=debug',
       '--timeout=43200',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-quotaholder-app snf-pithos-backend

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default Debian
installs "Recommended" packages, but if you have changed your configuration and
the package was not installed automatically, you should explicitly install it
by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install Synnefo in a custom made
`Django <https://www.djangoproject.com/>`_ project. This corner case
concerns only very advanced users that know what they are doing and want to
experiment with synnefo.

.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
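
One convenient way to produce such a random string is the system's Python
interpreter (a sketch, using the Python 2 shipped with Debian Squeeze):

.. code-block:: console

   # python -c "import random, string; print ''.join(random.SystemRandom().choice(string.letters + string.digits) for _ in range(50))"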

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our installation
(for all services). ``ASTAKOS_BASEURL`` is the astakos home page.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding to
    ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Since version 0.13 you need to configure some basic settings for the new *Quota*
feature.

Specifically:

Edit ``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
    QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should also modify psycopg2 behavior, so
it works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in a ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

    # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

    # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
    # snf-manage service-add "cyclades" https://node1.example.com/ui/
    # snf-manage service-add "pithos+" https://node2.example.com/ui/

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.

Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data in the sign up form. Then click "SUBMIT". You should
now see a green box at the top, informing you that you made a successful request
and that the request has been sent to the administrators. So far so good; let's
assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.

Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set at node1.
You only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
   PITHOS_USER_CATALOG_URL = 'https://node1.example.com/user_catalogs'
   PITHOS_USER_FEEDBACK_URL = 'https://node1.example.com/feedback'
   PITHOS_USER_LOGIN_URL = 'https://node1.example.com/login'

   PITHOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   PITHOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
   PITHOS_USE_QUOTAHOLDER = True

   # Set to False if astakos & pithos are on the same host
   #PITHOS_PROXY_USER_SERVICES = True

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI the
astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos+ service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200',
     ),
    }

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.

Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Log in, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui/``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos+
    * Uploading your custom Images to Pithos+
    * Spawning VMs from those custom Images
    * Registering existing Pithos+ files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.

Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET provided packages until stable 2.7 is out. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` files, for password-less root ssh between
each other. If they don't, skip passing ``--no-ssh-init`` below, but be aware
that this will replace the ``/root/.ssh/*`` related files and you might lose
access to the master node.
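
One way to share the keys is to generate them once on node1 and copy the whole
directory over to node2 (a sketch, assuming root ssh access between the nodes
is allowed at this point; adapt to your security policy):

.. code-block:: console

   root@node1:~ # ssh-keygen -t rsa    # accept the defaults
   root@node1:~ # cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
   root@node1:~ # scp -r /root/.ssh node2.example.com:/root/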

Also, make sure there is an lvm volume group named ``ganeti`` that will host
your VMs' disks. Finally, set up a bridge interface on the host machines (e.g.
br0). Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

    
1102
.. _cyclades-install-snfimage:
1103

    
1104
snf-image
1105
---------
1106

    
1107
Installation
1108
~~~~~~~~~~~~
1109
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
1110
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
1111
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
1112
node1 and node2. You can do this by running on *both* nodes:
1113

    
1114
.. code-block:: console
1115

    
1116
   # apt-get install snf-image snf-pithos-backend python-psycopg2
1117

    
1118
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
1119
to handle image files stored on Pithos+. It also needs `python-psycopg2` to be
1120
able to access the Pithos+ database. This is why, we also install them on *all*
1121
VM-capable Ganeti nodes.
1122

    
1123
.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
1124
    not  work out of the box if you try to use URLs served by servers which do
1125
    not have a valid certificate. To circumvent this you should edit the file
1126
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.
1127

    
1128
After `snf-image` has been installed successfully, create the helper VM by
1129
running on *both* nodes:
1130

    
1131
.. code-block:: console
1132

    
1133
   # snf-image-update-helper
1134

    
1135
This will create all the needed files under ``/var/lib/snf-image/helper/`` for
1136
snf-image to run successfully, and it may take a few minutes depending on your
1137
Internet connection.
1138

    
1139
Configuration
1140
~~~~~~~~~~~~~
1141
snf-image supports native access to Images stored on Pithos+. This means that
1142
it can talk directly to the Pithos+ backend, without the need of providing a
1143
public URL. More details, are described in the next section. For now, the only
1144
thing we need to do, is configure snf-image to access our Pithos+ backend.
1145

    
1146
To do this, we need to set the corresponding variables in
1147
``/etc/default/snf-image``, to reflect our Pithos+ setup:
1148

    
1149
.. code-block:: console
1150

    
1151
    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"
1152

    
1153
    PITHOS_DATA="/srv/pithos/data"
1154

    
1155
If you have installed your Ganeti cluster on different nodes than node1 and
1156
node2 make sure that ``/srv/pithos/data`` is visible by all of them.
1157

    
1158
If you would like to use Images that are also/only stored locally, you need to
1159
save them under ``IMAGE_DIR``, however this guide targets Images stored only on
1160
Pithos+.
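
For instance, a hypothetical local setup could point ``IMAGE_DIR`` in
``/etc/default/snf-image`` at a directory of your choice (the path below is
just an example):

.. code-block:: console

    IMAGE_DIR="/var/lib/snf-image"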

Testing
~~~~~~~

You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it, for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos+ Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: if you want to deploy an Image stored on Pithos+ (our case), this
   should have the format ``pithos://<UUID>/<container>/<filename>``:

   * ``username``: ``user@example.com`` (defined during the Astakos sign up)
   * ``container``: ``pithos`` (default, if the Web UI was used)
   * ``filename``: the name of the file (visible also from the Web UI)

 * ``img_properties``: taken from the metadata file. Only the two mandatory
   properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used. `Learn more
   <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos+. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To do so, please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11 all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing Cyclades,
we will perform some tests in this section; the actual setup will be done
afterwards with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
VMs.
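
For example, the relevant lines might look like the following (a sketch; the
DNS IP ``4.3.2.10`` is hypothetical, and you should check the comments in the
shipped configuration file for the exact syntax expected by your nfdhcpd
version):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 4.3.2.10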

Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or otherwise make sure the following rule is applied after every boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

After running the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.
1379

    
1380
Public Network Setup
1381
--------------------
1382

    
1383
To achieve basic networking the simplest way is to have a common bridge (e.g.
1384
``br0``, on the same collision domain with the router) where all VMs will
1385
connect to. Packets will be "forwarded" to the router and then to the Internet.
1386
If you want a more advanced setup (ip-less routing and proxy-arp plese refer to
1387
:ref:`Network <networks>` section).
1388

    
1389
Physical Host Setup
1390
~~~~~~~~~~~~~~~~~~~
1391

    
1392
Assuming ``eth0`` on both hosts is the public interface (directly connected
1393
to the router), run on every node:
1394

    
1395
.. code-block:: console
1396

    
1397
   # apt-get install vlan
1398
   # brctl addbr br0
1399
   # ip link set br0 up
1400
   # vconfig add eth0 100
1401
   # ip link set eth0.100 up
1402
   # brctl addif br0 eth0.100
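
Note that the above commands do not persist across reboots. If you want the
VLAN and the bridge to come up automatically at boot, you could describe them
in ``/etc/network/interfaces`` along the following lines (a sketch, assuming
the standard Debian ``vlan`` and ``bridge-utils`` packages; adapt it to your
own setup):

.. code-block:: console

   auto eth0.100
   iface eth0.100 inet manual
        vlan-raw-device eth0

   auto br0
   iface br0 inet manual
        bridge_ports eth0.100
        bridge_stp off
        bridge_fd 0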

Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0
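
Optionally, you can verify that the network has been defined and connected as
expected, using the standard Ganeti commands (shown here purely as a sanity
check):

.. code-block:: console

   # gnt-network list
   # gnt-network info test-net-public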

Now, it is time to test that the backend infrastructure is set up correctly
for the Public Network. We will add a new VM, the same way we did in the
previous testing section. However, this time we will also add one NIC,
configured to be managed by our previously defined network. Run on the
GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS config (the nameserver option in nfdhcpd.conf). This verifies the correct
configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also correctly
configured your physical host and router.

Make sure everything works as expected, before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

For the first type a common bridge (e.g. ``prv0``) is needed, while for the
second a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a
different physical VLAN. To assure isolation among end-users' private
networks, each network has to have a different MAC prefix (for the filtering
to take place) or to be "connected" to a different bridge (i.e. a different
VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

In order to create the necessary VLANs/bridges, one for MAC filtered private
networks and several (e.g. 20) for private networks based on physical VLANs,
and assuming ``eth0`` of both hosts are somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:

.. code-block:: console

   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
      done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new VLANs: ``eth0.0`` - ``eth0.20``
 * add the corresponding VLAN to the equivalent bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third NIC connected to ``prv1``.

We run the same command as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the Internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (see also the
sketch right after this list):

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
    the default one (``aa:00:00``).

 c) Bring ``eth1`` and ``eth2`` up (``ip link set <iface> up``).

 d) Run ``dhclient`` on ``eth1`` and ``eth2``.

 e) On testvm3, ping 192.168.1.2 and 10.0.0.2.
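
Put together, steps (c) through (e) on testvm3 would look roughly like this (a
sketch; the target IPs assume that testvm4's ``eth1`` obtained ``192.168.1.2``
from the pool and that its ``eth2`` was configured as ``10.0.0.2``):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2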

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variable should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the resulting MD5 digest in ``/var/lib/ganeti/rapi/users`` as
follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.
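
If you want a quick, optional sanity check that the new user works, you can
query the RAPI directly. The example below assumes RAPI listens on its default
port (5080) on the Ganeti master and uses ``-k`` to accept the self-signed
certificate:

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://ganeti.node1.example.com:5080/2/info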

You have now finished with all the needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.

Since version 0.13, Synnefo uses the VMAPI to prevent sensitive data needed by
'snf-image' (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a
cache backend and exporting it via the VMAPI. The cache entries are
invalidated after the first request. Synnefo uses
`memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.
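
You can quickly check that memcached is up and listening on its default port
(purely a sanity check; the port must match the ``VMAPI_CACHE_BACKEND``
setting configured below):

.. code-block:: console

   # netstat -ntlp | grep 11211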

Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed to end up with a working system.
In general, sane defaults have been chosen for most of the options, to cover
most of the common scenarios. However, if you want to tweak Cyclades feel free
to do so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

   # Set to False if astakos & cyclades are on the same host
   CYCLADES_PROXY_USER_SERVICES = False

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

.. warning::

   All services must match the quotaholder token and url configured for
   quotaholder.

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored), so we set it
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above setting denotes the Message Queue. It should have the same value as
in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file we edited
earlier, and reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect users
when they log out. We point that to Astakos, too.

Edit ``/etc/synnefo/20-snf-cyclades-app-quotas.conf``:

.. code-block:: console

   CYCLADES_USE_QUOTAHOLDER = True
   CYCLADES_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   CYCLADES_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"
   VMAPI_BASE_URL = "https://node1.example.com"

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. For example, you could update the
backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and password. Once we
have set up the first backend to point at our Ganeti cluster, we update the
Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a common bridge. After a bridge (e.g. ``br0``) has been created
on every backend node, edit the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to
``'br0'`` and create the Public Network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --flavor=CUSTOM \
                               --link=br0 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the administrator has to define
these two pools in Synnefo:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during the installation of Cyclades, so let's
enable it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
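
For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log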

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quotas
------------

Finally, apply the quotas by running the following commands on the respective
nodes:

.. code-block:: console

   node1 # snf-manage astakos-init --load-service-resources
   node1 # snf-manage astakos-quota --verify
   node1 # snf-manage astakos-quota --sync
   node2 # snf-manage pithos-reset-usage
   node1 # snf-manage cyclades-reset-usage

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup.

Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

 `https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all
the available Images from which you can spawn new VMs. The list should
currently be empty, as we haven't registered any Images yet. Close the wizard
and browse the interface (not many things to see yet). If everything seems to
work, let's register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored
on Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
the steps, even though you may already have uploaded an Image on Pithos+ from
a :ref:`previous <snf-image-images>` section:

 * Upload an Image file to Pithos+
 * Register that Image file to Plankton
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set store.enable on
   $ kamaki config set store.pithos_extensions on
   $ kamaki config set store.account USER_UUID
   $ kamaki config set global.token USER_TOKEN

The USER_TOKEN and USER_UUID appear on the user's (``user@example.com``)
`Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly, by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we could upload the
Image under the root ``Pithos`` container (as you may have done when uploading
the Image from the Pithos+ Web UI), we will create a new container called
``images`` and store the Image under that container. We do this for two
reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
                         /srv/images/debian_base-6.0-7-x86_64.diskdump \
                         debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote name on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.
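
If you prefer to verify the upload from the command line instead of the Web
UI, you can list the container's contents with something along these lines
(the exact subcommand may differ between kamaki versions):

.. code-block:: console

   $ kamaki store list images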

Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://USER_UUID/images/debian_base-6.0-7-x86_64.diskdump`` as an Image in
Plankton. This Image will be public (``--public``), so all users will be able
to spawn VMs from it, and it is of type ``diskdump``. The first two properties
(``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the remaining
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image. The
``OS`` property's valid values may be found in the ``IMAGE_ICONS`` variable
inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.
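
To confirm that the registration went through, you can list the Images that
Plankton now exposes (shown here only as an optional check; the Image should
appear with the name "Debian Base"):

.. code-block:: console

   $ kamaki image list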

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returned successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by
selecting a flavor and a name for your VM. Then finish by clicking 'Create'.
Make sure you write down your password, because you *WON'T* be able to
retrieve it later.

If everything was set up correctly, after a few minutes your new machine will
go to state 'Running' and you will be able to use it. Click 'Console' to
connect through VNC out of band, or click on the machine's icon to connect
directly via SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.

General Testing
===============

Notes
=====