.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

 * Identity Management (Astakos)
 * Object Storage Service (Pithos+)
 * Compute Service (Cyclades)
 * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can fix this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com

General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
| ``deb http://apt.dev.grnet.gr squeeze-backports main``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

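After changing ``/etc/apt/sources.list``, refresh the package index so the new
repositories become visible (standard apt workflow):

.. code-block:: console

   # apt-get update
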
You also need a shared directory visible by both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the no_root_squash flag). Node2 has this directory
mounted under ``/srv/pithos``, too.

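As a minimal sketch of that NFS setup (using the standard Debian NFS packages;
adjust the client IP to your own):

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,no_root_squash,sync)" >> /etc/exports
   root@node1:~ # /etc/init.d/nfs-kernel-server restart

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos
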
Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described at the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)
 * ntp (NTP daemon)

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

  deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host the tables
of all django apps. We also create the user ``synnefo`` and grant him all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be created
on node2 instead, but we do it on node1 for simplicity. We will create all
needed databases on node1 and node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

   host		all	all	4.3.2.1/32	md5
   host		all	all	4.3.2.2/32	md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=8',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

   #  SetEnv no-gzip
   #  SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

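You can still validate the syntax of the new configuration without starting the
server; ``apache2ctl configtest`` only parses the config files:

.. code-block:: console

   # apache2ctl configtest
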
.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

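You can confirm that the user exists and has the expected permissions; both
commands are part of the standard rabbitmqctl toolset:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions
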
We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

   # chmod g+ws data
327

    
328
You are now ready with all general prerequisites concerning node1. Let's go to
329
node2.
330

    
331
Node2
332
-----
333

    
334
General Synnefo dependencies
335
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
336

    
337
 * apache (http server)
338
 * gunicorn (WSGI http server)
339
 * postgresql (database)
340
 * ntp (NTP daemon)
341

    
342
You can install the above by running:
343

    
344
.. code-block:: console
345

    
346
   # apt-get install apache2 postgresql ntp
347

    
348
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
349
the official debian backports:
350

    
351
.. code-block:: console
352

    
353
   # apt-get -t squeeze-backports install gunicorn
354

    
355
Node2 will connect to the databases on node1, so you will also need the
356
python-psycopg2 package:
357

    
358
.. code-block:: console
359

    
360
   # apt-get install python-psycopg2
361

    
Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software, you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.

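Optionally, verify that node2 can actually reach the databases on node1 (a
quick sanity check, using the password chosen earlier):

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps
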
Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(the same structure as on node1, but with different ``args``; note the
``--workers`` count and the added ``--timeout`` argument):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default, Debian
installs "Recommended" packages, but if you have changed your configuration and
the package didn't install automatically, you should install it manually by
running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency, is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users that know
what they are doing and want to experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                      # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django specific setting, used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

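One quick way to generate such a random string (a sketch; any method producing
a long, unpredictable string is fine):

.. code-block:: console

   # python -c "import os, base64; print base64.b64encode(os.urandom(42))"
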
For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

   ASTAKOS_DEFAULT_ADMIN_EMAIL = None

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASEURL`` is the astakos home page.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha
    authentication. If you would like to enable it, you have to edit the
    following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

These settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding to ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

   from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
   monkey_patch_psycopg2()

If running with greenlets, we should also modify psycopg2 behavior, so it works
properly in a greenlet context:

.. code-block:: console

   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

   # Monkey-patch psycopg2
   from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
   monkey_patch_psycopg2()

   # If running with greenlets
   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
        'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                      # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a django superuser, so we select
``[no]`` at the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

   # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

   # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
   # snf-manage service-add "cyclades" https://node1.example.com/ui/
   # snf-manage service-add "pithos+" https://node2.example.com/ui/

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box at the top, informing you that you made a successful request
and that the request has been sent to the administrators. So far so good. Let's
assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage user-modify --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
   PITHOS_USER_CATALOG_URL = 'http://node1.example.com/user_catalogs'
   PITHOS_USER_FEEDBACK_URL = 'http://node1.example.com/feedback'
   PITHOS_USER_LOGIN_URL = 'http://node1.example.com/login'


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above, we tell pithos+ that its database is
``snf_pithos`` at node1 and that it should connect as user ``synnefo`` with
password ``example_passw0rd``. All those settings were set up during node1's
"Database setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up
this directory during node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI where
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos+ service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need of further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos, and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, if running with Greenlets, it is also recommended to
modify psycopg2 behavior so it works properly in a greenlet context. This means
adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

   from synnefo.lib.db.psyco_gevent import make_psycopg_green
   make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` argument to your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--worker-class=gevent',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

 * Spawning VMs
 * Spawning VMs from Images stored on Pithos+
 * Uploading your custom Images to Pithos+
 * Spawning VMs from those custom Images
 * Registering existing Pithos+ files as Images
 * Connecting VMs to the Internet
 * Creating Private Networks
 * Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti. If you are extremely impatient, you can arrive at the
setup assumed above by running on both nodes:

.. code-block:: console

   # apt-get install -t squeeze-backports ganeti2 ganeti-htools
   # modprobe drbd minor_count=255 usermode_helper=/bin/true

Unfortunately, stock Ganeti doesn't support IP pool management yet (we are
working hard to merge it upstream for Ganeti 2.7). Synnefo depends on the IP
pool functionality of Ganeti, so you have to use GRNET's patches for now. To
do so, you have to build your own package from source. Please clone our local
repo:

.. code-block:: console

   # git clone https://code.grnet.gr/git/ganeti-local
   # cd ganeti-local
   # git checkout stable-2.6-ippool-hotplug-esi
   # git checkout debian-2.6

Then check that you can compile ganeti:

.. code-block:: console

   # cd ganeti-local
   # ./automake.sh
   # ./configure
   # make

To do so, you must have a correct build environment. Please refer to the
INSTALL file in the source tree. Most of the packages needed are listed here:

.. code-block:: console

   #  apt-get install graphviz automake lvm2 ssh bridge-utils iproute iputils-arping \
                      ndisc6 python python-pyopenssl openssl \
                      python-pyparsing python-simplejson \
                      python-pyinotify python-pycurl socat \
                      python-elementtree kvm qemu-kvm \
                      ghc6 libghc6-json-dev libghc6-network-dev \
                      libghc6-parallel-dev libghc6-curl-dev \
                      libghc-quickcheck2-dev hscolour hlint \
                      python-support python-paramiko \
                      python-fdsend python-ipaddr python-bitarray libjs-jquery fping

Now let's try to build the package:

.. code-block:: console

   # apt-get install git-buildpackage
   # mkdir ../build-area
   # git-buildpackage --git-upstream-branch=stable-2.6-ippool-hotplug-esi \
                   --git-debian-branch=debian-2.6 \
                   --git-export=INDEX \
                   --git-ignore-new

This will create two deb packages in build-area. You should then run on both
nodes:

.. code-block:: console

   # dpkg -i ../build-area/snf-ganeti.*deb
   # dpkg -i ../build-area/ganeti-htools.*deb
   # apt-get install -f

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's say
it's ``ganeti.node1.example.com``). Make sure node1 and node2 have root access
to each other using ssh keys and not passwords, make sure there is an LVM
volume group named ``ganeti`` that will host your VMs' disks, and set up a
bridge interface on the host machines (e.g. br0), as sketched below.
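A minimal sketch of these prerequisites (``/dev/sdb1`` is a hypothetical spare
partition for the volume group; adapt it to your disks):

.. code-block:: console

   # passwordless root ssh, in both directions
   root@node1:~ # ssh-keygen -t rsa
   root@node1:~ # ssh-copy-id root@node2.example.com

   # LVM volume group for the VMs' disks
   root@node1:~ # pvcreate /dev/sdb1
   root@node1:~ # vgcreate ganeti /dev/sdb1

Then run on node1: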
.. code-block:: console

   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                                   --no-etc-hosts --vg-name=ganeti \
                                   --nic-parameters link=br0 --master-netdev eth0 \
                                   ganeti.node1.example.com
   root@node1:~ # gnt-cluster modify --default-iallocator hail
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

   root@node1:~ # gnt-node add --no-node-setup --master-capable=yes \
                               --vm-capable=yes node2.example.com
   root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
   root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

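At this point it is worth confirming that the cluster sees both nodes (standard
Ganeti commands, not Synnefo specific):

.. code-block:: console

   root@node1:~ # gnt-cluster verify
   root@node1:~ # gnt-node list
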
For any problems you may stumble upon installing Ganeti, please refer to the
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image-host snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos+. It also needs `python-psycopg2` to be
able to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.

Now, you need to download and save the corresponding helper package. Please see
`here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest package.
Let's assume that you installed snf-image-host version 0.4.4-1. Then, you need
snf-image-helper v0.4.4-1 on *both* nodes:

.. code-block:: console

   # cd /var/lib/snf-image/helper/
   # wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.4.4-1_all.deb

.. warning:: Be careful: Do NOT install the snf-image-helper debian package.
             Just put it under /var/lib/snf-image/helper/

Once you have downloaded the snf-image-helper package, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # ln -s snf-image-helper_0.4.4-1_all.deb snf-image-helper.deb
   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image-host to run successfully.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without the need of
providing a public URL. More details are described in the next section. For
now, the only thing we need to do is configure snf-image to access our Pithos+
backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

   PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible on all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for installation
instructions, documentation on the design and implementation, and supported
Image formats.

.. _snf-image-images:

snf-image's actual Images
-------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image's Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

 * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in
   :file:`/etc/default/snf-image`)
 * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
 * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official
`snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page (`image link
   <https://pithos.okeanos.grnet.gr/public/9epgb>`_).

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_; a sketch follows.

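A hypothetical kamaki invocation might look like the following; the exact
subcommand and syntax depend on your kamaki version, so consult its
documentation:

.. code-block:: console

   $ kamaki store upload debian_base-6.0-7-x86_64.diskdump pithos
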
Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page (`image_metadata link
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it, for
spawning a VM from Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos+ Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So far, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:
1257

    
1258
 * ``img_passwd``: the arbitrary root password of your new instance
1259
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
1260
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
1261
               should have the format
1262
               ``pithos://<username>/<container>/<filename>``:
1263
                * ``username``: ``user@example.com`` (defined during Astakos sign up)
1264
                * ``container``: ``pithos`` (default, if the Web UI was used)
1265
                * ``filename``: the name of file (visible also from the Web UI)
1266
 * ``img_properties``: taken from the metadata file. Used only the two mandatory
1267
                       properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
1268
                       <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_
1269

    
1270
If the ``gnt-instance add`` command returns successfully, then run:
1271

    
1272
.. code-block:: console
1273

    
1274
   # gnt-instance info testvm1 | grep "console connection"
1275

    
1276
to find out where to connect using VNC. If you can connect successfully and can
1277
login to your new instance using the root password ``my_vm_example_passw0rd``,
1278
then everything works as expected and you have your new Debian Base VM up and
1279
running.
1280

    
1281
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
1282
to access the Pithos+ database and the Pithos+ backend data. Also, make sure
1283
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add``
1284
succeeds but you cannot connect, again find out what went wrong. Do *NOT*
1285
proceed to the next steps unless you are sure everything works till this point.
1286

    
1287
If everything works, you have successfully connected Ganeti with Pithos+. Let's
1288
move on to networking now.
1289

    
.. warning::
    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades. To do so, please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11, all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific
   kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing
Cyclades, we will perform some testing actions in this section; the actual
setup will be done afterwards with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response.

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your
new VMs.
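As a sketch, the relevant lines of the file might look like this (``4.3.2.10``
is a hypothetical DNS server IP standing in for your own; leave the rest of the
file at its defaults):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 4.3.2.10

Once you are finished, restart the server on all nodes: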
.. code-block:: console

   # /etc/init.d/nfdhcpd restart

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure the following runs after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

To achieve basic networking, the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain as the router) to which all VMs connect.
Packets will be "forwarded" to the router and then to the Internet. If you want
a more advanced setup (IP-less routing and proxy-ARP), please refer to the
:ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # brctl addbr br0
   # ip link set br0 up
   # vconfig add eth0 100
   # ip link set eth0.100 up
   # brctl addif br0 eth0.100


1418
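Note that the above commands do not persist across reboots. One way to make
the bridge permanent on Debian is a stanza along these lines in
``/etc/network/interfaces`` (a sketch, assuming the ``vlan`` and
``bridge-utils`` ifupdown hooks are installed):

.. code-block:: console

   auto eth0.100
   iface eth0.100 inet manual

   auto br0
   iface br0 inet manual
       bridge_ports eth0.100
       bridge_stp off
       bridge_fd 0
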
Testing a Public Network
~~~~~~~~~~~~~~~~~~~~~~~~

Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
network by running:

.. code-block:: console

   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public

Then, connect the network to all your nodegroups. We assume that we only have
one nodegroup (``default``) in our Ganeti cluster:

.. code-block:: console

   # gnt-network connect test-net-public default bridged br0

Now, it is time to test that the backend infrastructure is correctly set up
for the Public Network. We will add a new VM, the same way we did in the
previous testing section. However, this time we will also add one NIC,
configured to be managed by our previously defined network. Run on the
GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

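If you need console access to the new instance, you can attach to its serial
console from the GANETI-MASTER (a sketch; it assumes the image spawns a getty
on the serial console):

.. code-block:: console

   # gnt-instance console testvm2
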
If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ip addr
   root@testvm2:~ # ip route
   root@testvm2:~ # cat /etc/resolv.conf

to check the IP address (5.6.7.2), the IP routes (default via 5.6.7.1) and the
DNS configuration (the nameservers option in nfdhcpd.conf). This verifies the
correct configuration of Ganeti, snf-network and nfdhcpd.

Now ping the outside world. If this works too, then you have also configured
your physical host and router correctly.

Make sure everything works as expected before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

Synnefo supports two types of private networks:

 - based on MAC filtering
 - based on physical VLANs

Both types provide Layer 2 isolation to the end-user.

The first type requires a common bridge (e.g. ``prv0``), while the second
requires a range of bridges (e.g. ``prv1..prv100``), each bridged on a
different physical VLAN. To ensure isolation among end-users' private
networks, each network must either have a different MAC prefix (for the
filtering to take place) or be "connected" to a different bridge (i.e. a
different VLAN).

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

We need to create the necessary VLANs/bridges: one for the MAC filtered
private networks and several (e.g. 20) for the private networks based on
physical VLANs.

Assuming ``eth0`` on both hosts is somehow connected together (via a
cable/switch with correctly configured VLANs), run on every node:

.. code-block:: console

   # apt-get install vlan
   # modprobe 8021q
   # iface=eth0
   # for prv in $(seq 0 20); do
        vlan=$prv
        bridge=prv$prv
        vconfig add $iface $vlan
        ifconfig $iface.$vlan up
        brctl addbr $bridge
        brctl setfd $bridge 0
        brctl addif $bridge $iface.$vlan
        ifconfig $bridge up
     done

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new VLANs: ``eth0.0`` - ``eth0.20``
 * add each VLAN to its corresponding bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC filtered and one physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third NIC connected to ``prv1``.

We run the same commands as in the Public Network testing section, but with
extra arguments for the additional NICs:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the Internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``, while ``eth2`` will have
    the default one (``aa:00:00``).

 c) Bring the extra interfaces up: ``ip link set eth1 up``,
    ``ip link set eth2 up``.

 d) Request a lease on them: ``dhclient eth1``, ``dhclient eth2``.

 e) On testvm3, ping the other instance's private addresses:
    ``ping 192.168.1.2``, ``ping 10.0.0.2`` (see the example below).

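For example, on testvm3 the last three steps might look like this (a sketch;
the exact addresses you receive depend on the state of the IP pools, and since
the VLAN network was created with ``ip=none`` for its NICs, you may need to
configure an address on ``eth2`` by hand before pinging ``10.0.0.2``):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # ping -c 3 192.168.1.2
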
If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output (the hex digest only) in
``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

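Once the user is in place, you can sanity-check the credentials against the
RAPI endpoint (a sketch; ``5080`` is Ganeti's default RAPI port and ``-k``
skips verification of the self-signed certificate):

.. code-block:: console

   # curl -k -u cyclades:example_rapi_passw0rd https://node1.example.com:5080/2/info
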
More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.

You have now finished with all needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app

.. warning:: Make sure you have installed ``python-gevent`` version >= 0.13.6.
    This version is available at squeeze-backports and can be installed by
    running: ``apt-get install -t squeeze-backports python-gevent``

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed for a working system. In
general, sane defaults have been chosen for most of the options, to cover most
of the common scenarios. However, if you want to tweak Cyclades feel free to
do so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is
set to point to Astakos (this should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` on the previous
:ref:`Pithos configuration <conf-pithos>` section.

The ``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos
service. You can see all :ref:`registered services <services-reg>` by running
on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored). So we set
that to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the
actual Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. Those settings should have the
same values as in the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf``
file, and reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect the
user when he/she logs out. We point that to Astakos, too.

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the
corresponding domain that resolves to the master IP, rather than the IP
itself, to ensure Cyclades can talk to Ganeti even after a Ganeti
master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we set up the first backend to point at our Ganeti cluster, we
update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't get into more detail regarding multiple backends. If you want
to learn more please see /*TODO*/.

Add a Public Network
--------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to attach every
created NIC to a bridge. Once a bridge (e.g. ``br0``) exists on every backend
node, set the Synnefo setting ``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'`` and run:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --type=CUSTOM_BRIDGED \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend.
To make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on
the Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>


Create pools for Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the admin has to define these two
pools in Synnefo:

.. code-block:: console

   $ snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   $ snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   PRIVATE_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during installation of Cyclades, so let's enable
it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

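For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
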
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup. Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all, we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

 `https://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all
the available Images from which you can spawn new VMs. The list should
currently be empty, as we haven't registered any Images yet. Close the wizard
and browse the interface (not many things to see yet). If everything seems to
work, let's register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored
on Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
steps, even though you may already have uploaded an Image on Pithos+ from a
:ref:`previous <snf-image-images>` section:

 * Upload an Image file to Pithos+
 * Register that Image file to Plankton
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set global.token "bdY_example_user_tokenYUff=="

The token at the last kamaki command is our user's (``user@example.com``)
token, as it appears on the user's `Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly by
running:

.. code-block:: console

   $ kamaki config list

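As an extra sanity check, you can also query the Compute API through kamaki
(assuming the Cyclades service is up and your token is valid; the server list
will simply be empty at this point):

.. code-block:: console

   $ kamaki server list
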
Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we could upload the
Image under the root ``pithos`` container (as you may have done when uploading
the Image from the Pithos+ Web UI), we will create a new container called
``images`` and store the Image under that container. We do this for two
reasons:

a) To demonstrate how to create containers other than the default ``pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
                         /srv/images/debian_base-6.0-7-x86_64.diskdump \
                         debian_base-6.0-7-x86_64.diskdump

The first is the local path and the second is the remote path on Pithos+. If
the new container and the file appear on the Pithos+ Web UI, then you have
successfully created the container and uploaded the Image file.

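You can also verify the upload from the command line (an assumption on our
part: that your kamaki version provides a ``store list`` command taking the
container name):

.. code-block:: console

   $ kamaki store list images
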
Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades) by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Plankton. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the other
properties are optional, but recommended, so that the Images appear nicely on
the Cyclades Web UI. ``Debian Base`` will appear as the name of this Image.
The ``OS`` property's valid values may be found in the ``IMAGE_ICONS``
variable inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

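To confirm the registration from the command line, you can also list the
Images that Plankton now serves:

.. code-block:: console

   $ kamaki image list
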
Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI
from your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will
appear. Click on 'My Images' (right after 'System' Images) on the left pane of
the wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returns successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by
selecting a flavor and a name for your VM. Then finish by clicking 'Create'.
Make sure you write down your password, because you *WON'T* be able to
retrieve it later.

If everything was set up correctly, after a few minutes your new machine will
go to state 'Running' and you will be able to use it. Click 'Console' to
connect through VNC out of band, or click on the machine's icon to connect
directly via SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.


General Testing
===============


Notes
=====