.. _quick-install-admin-guide:
2

    
3
Administrator's Quick Installation Guide
4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5

    
6
This is the Administrator's quick installation guide.
7

    
8
It describes how to install the whole synnefo stack on two (2) physical nodes,
9
with minimum configuration. It installs synnefo from Debian packages, and
10
assumes the nodes run Debian Squeeze. After successful installation, you will
11
have the following services running:
12

    
13
 * Identity Management (Astakos)
14
 * Object Storage Service (Pithos+)
15
 * Compute Service (Cyclades)
16
 * Image Registry Service (Plankton)
17

    
18
and a single unified Web UI to manage them all.
19

    
20
The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
21
not released yet.
22

    
23
If you just want to install the Object Storage Service (Pithos+), follow the
guide and stop after the "Testing of Pithos+" section.
25

    
26

    
27
Installation of Synnefo / Introduction
28
======================================
29

    
30
We will install the services in the order listed above. Cyclades and Plankton
will be installed in a single step (at the end), because at the moment they are
contained in the same software component. Furthermore, we will install all
services on the first physical node, except Pithos+, which will be installed on
the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).
36

    
37
For the rest of the documentation we will refer to the first physical node as
38
"node1" and the second as "node2". We will also assume that their domain names
39
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
40
"4.3.2.2" respectively.
41

    
42
.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can fix this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:
45

    
46
    .. code-block:: console
47

    
48
        4.3.2.1     node1.example.com
49
        4.3.2.2     node2.example.com
50

    
51

    
52
General Prerequisites
53
=====================
54

    
55
These are the general synnefo prerequisites that you need on both node1 and
node2. They are related to all the services (Astakos, Pithos+, Cyclades, Plankton).
57

    
58
To be able to download all synnefo components you need to add the following
59
lines in your ``/etc/apt/sources.list`` file:
60

    
61
| ``deb http://apt.dev.grnet.gr squeeze main``
62
| ``deb-src http://apt.dev.grnet.gr squeeze main``
63
| ``deb http://apt.dev.grnet.gr squeeze-backports main``
64

    
65
and import the repo's GPG key:
66

    
67
| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``
68

    
69
Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:
73

    
74
| ``deb http://backports.debian.org/debian-backports squeeze-backports main``
75

    
76
You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2 has
this directory mounted under ``/srv/pithos``, too.
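
For reference, a minimal NFS setup could look like the following. This is only
a sketch, assuming the example IPs above and the standard Debian
``nfs-kernel-server``/``nfs-common`` packages; adapt the export options to your
environment:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_root_squash,no_subtree_check)' >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos
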
83

    
84
Before starting the synnefo installation, you will need basic third party
85
software to be installed and configured on the physical nodes. We will describe
86
each node's general prerequisites separately. Any additional configuration,
87
specific to a synnefo service for each node, will be described at the service's
88
section.
89

    
90
Node1
91
-----
92

    
93
General Synnefo dependencies
94
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
95

    
96
 * apache (http server)
97
 * gunicorn (WSGI http server)
98
 * postgresql (database)
99
 * rabbitmq (message queue)
100

    
101
You can install apache2 and postgresql by running:
102

    
103
.. code-block:: console
104

    
105
   # apt-get install apache2 postgresql
106

    
107
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
108
the official debian backports:
109

    
110
.. code-block:: console
111

    
112
   # apt-get -t squeeze-backports install gunicorn
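
You can verify that the installed version satisfies the requirement by checking
the package version, for example:

.. code-block:: console

   # dpkg -s gunicorn | grep '^Version'
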
113

    
114
On node1, we will create our databases, so you will also need the
115
python-psycopg2 package:
116

    
117
.. code-block:: console
118

    
119
   # apt-get install python-psycopg2
120

    
121
To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
122
following line to ``/etc/apt/sources.list``:
123

    
124
.. code-block:: console
125

    
126
  deb http://www.rabbitmq.com/debian testing main
127

    
128
Add the RabbitMQ public key to the trusted key list:
129

    
130
.. code-block:: console
131

    
132
  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
133
  # apt-key add rabbitmq-signing-key-public.asc
134

    
135
Finally, to install the package run:
136

    
137
.. code-block:: console
138

    
139
  # apt-get update
140
  # apt-get install rabbitmq-server
141

    
142
Database setup
143
~~~~~~~~~~~~~~
144

    
145
On node1, we create a database called ``snf_apps`` that will host all tables
related to the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:
148

    
149
.. code-block:: console
150

    
151
   root@node1:~ # su - postgres
152
   postgres@node1:~ $ psql
153
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
154
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
155
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;
156

    
157
We also create the database ``snf_pithos`` needed by the pithos+ backend and
158
grant the ``synnefo`` user all privileges on the database. This database could
159
be created on node2 instead, but we do it on node1 for simplicity. We will
160
create all needed databases on node1 and then node2 will connect to them.
161

    
162
.. code-block:: console
163

    
164
   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
165
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
166

    
167
Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :
170

    
171
.. code-block:: console
172

    
173
   listen_addresses = '*'
174

    
175
Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
176
node2 to connect to the database. Add the following lines under ``#IPv4 local
177
connections:`` :
178

    
179
.. code-block:: console
180

    
181
   host		all	all	4.3.2.1/32	md5
182
   host		all	all	4.3.2.2/32	md5
183

    
184
Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
185
actual IPs. Now, restart the server to apply the changes:
186

    
187
.. code-block:: console
188

    
189
   # /etc/init.d/postgresql restart
190

    
191
Gunicorn setup
192
~~~~~~~~~~~~~~
193

    
194
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:
195

    
196
.. code-block:: console
197

    
198
   CONFIG = {
199
    'mode': 'django',
200
    'environment': {
201
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
202
    },
203
    'working_dir': '/etc/synnefo',
204
    'user': 'www-data',
205
    'group': 'www-data',
206
    'args': (
207
      '--bind=127.0.0.1:8080',
208
      '--workers=4',
209
      '--log-level=debug',
210
    ),
211
   }
212

    
213
.. warning:: Do NOT start the server yet, because it won't find the
214
    ``synnefo.settings`` module. We will start the server after successful
215
    installation of astakos. If the server is running::
216

    
217
       # /etc/init.d/gunicorn stop
218

    
219
Apache2 setup
220
~~~~~~~~~~~~~
221

    
222
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
223
the following:
224

    
225
.. code-block:: console
226

    
227
   <VirtualHost *:80>
228
     ServerName node1.example.com
229

    
230
     RewriteEngine On
231
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
232
     RewriteRule ^(.*)$ - [F,L]
233
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
234
   </VirtualHost>
235

    
236
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
237
containing the following:
238

    
239
.. code-block:: console
240

    
241
   <IfModule mod_ssl.c>
242
   <VirtualHost _default_:443>
243
     ServerName node1.example.com
244

    
245
     Alias /static "/usr/share/synnefo/static"
246

    
247
   #  SetEnv no-gzip
248
   #  SetEnv dont-vary
249

    
250
     AllowEncodedSlashes On
251

    
252
     RequestHeader set X-Forwarded-Protocol "https"
253

    
254
     <Proxy * >
255
       Order allow,deny
256
       Allow from all
257
     </Proxy>
258

    
259
     SetEnv                proxy-sendchunked
260
     SSLProxyEngine        off
261
     ProxyErrorOverride    off
262

    
263
     ProxyPass        /static !
264
     ProxyPass        / http://localhost:8080/ retry=0
265
     ProxyPassReverse / http://localhost:8080/
266

    
267
     RewriteEngine On
268
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
269
     RewriteRule ^(.*)$ - [F,L]
270
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]
271

    
272
     SSLEngine on
273
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
274
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
275
   </VirtualHost>
276
   </IfModule>
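
The configuration above uses the default "snakeoil" certificate that ships with
Debian's ``ssl-cert`` package. If the certificate files referenced above are
missing on your system, you can generate them (and later replace them with a
proper certificate for production use):

.. code-block:: console

   # apt-get install ssl-cert
   # make-ssl-cert generate-default-snakeoil
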
277

    
278
Now enable sites and modules by running:
279

    
280
.. code-block:: console
281

    
282
   # a2enmod ssl
283
   # a2enmod rewrite
284
   # a2dissite default
285
   # a2ensite synnefo
286
   # a2ensite synnefo-ssl
287
   # a2enmod headers
288
   # a2enmod proxy_http
289

    
290
.. warning:: Do NOT start/restart the server yet. If the server is running::
291

    
292
       # /etc/init.d/apache2 stop
293

    
294
.. _rabbitmq-setup:
295

    
296
Message Queue setup
297
~~~~~~~~~~~~~~~~~~~
298

    
299
The message queue will run on node1, so we need to create the appropriate
300
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
301
exchanges:
302

    
303
.. code-block:: console
304

    
305
   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
306
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
307

    
308
We do not need to initialize the exchanges. This will be done automatically,
309
during the Cyclades setup.
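
To quickly verify that the user exists and the permissions are in place, you can
optionally run:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions
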
310

    
311
Pithos+ data directory setup
312
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
313

    
314
As mentioned in the General Prerequisites section, there is a directory called
315
``/srv/pithos`` visible by both nodes. We create and setup the ``data``
316
directory inside it:
317

    
318
.. code-block:: console
319

    
320
   # cd /srv/pithos
321
   # mkdir data
322
   # chown www-data:www-data data
323
   # chmod g+ws data
324

    
325
You are now ready with all general prerequisites concerning node1. Let's go to
326
node2.
327

    
328
Node2
329
-----
330

    
331
General Synnefo dependencies
332
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
333

    
334
 * apache (http server)
335
 * gunicorn (WSGI http server)
336
 * postgresql (database)
337

    
338
You can install the above by running:
339

    
340
.. code-block:: console
341

    
342
   # apt-get install apache2 postgresql
343

    
344
Make sure to install gunicorn >= v0.12.2. You can do this by installing from
345
the official debian backports:
346

    
347
.. code-block:: console
348

    
349
   # apt-get -t squeeze-backports install gunicorn
350

    
351
Node2 will connect to the databases on node1, so you will also need the
352
python-psycopg2 package:
353

    
354
.. code-block:: console
355

    
356
   # apt-get install python-psycopg2
357

    
358
Database setup
359
~~~~~~~~~~~~~~
360

    
361
All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but those kinds of setups are
outside the scope of this guide.
366

    
367
Gunicorn setup
368
~~~~~~~~~~~~~~
369

    
370
Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
371
(nearly the same as in node1, with an additional ``--timeout`` argument):
372

    
373
.. code-block:: console
374

    
375
   CONFIG = {
376
    'mode': 'django',
377
    'environment': {
378
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
379
    },
380
    'working_dir': '/etc/synnefo',
381
    'user': 'www-data',
382
    'group': 'www-data',
383
    'args': (
384
      '--bind=127.0.0.1:8080',
385
      '--workers=4',
386
      '--log-level=debug',
387
      '--timeout=43200'
388
    ),
389
   }
390

    
391
.. warning:: Do NOT start the server yet, because it won't find the
392
    ``synnefo.settings`` module. We will start the server after successful
393
    installation of astakos. If the server is running::
394

    
395
       # /etc/init.d/gunicorn stop
396

    
397
Apache2 setup
398
~~~~~~~~~~~~~
399

    
400
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
401
the following:
402

    
403
.. code-block:: console
404

    
405
   <VirtualHost *:80>
406
     ServerName node2.example.com
407

    
408
     RewriteEngine On
409
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
410
     RewriteRule ^(.*)$ - [F,L]
411
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
412
   </VirtualHost>
413

    
414
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
415
containing the following:
416

    
417
.. code-block:: console
418

    
419
   <IfModule mod_ssl.c>
420
   <VirtualHost _default_:443>
421
     ServerName node2.example.com
422

    
423
     Alias /static "/usr/share/synnefo/static"
424

    
425
     SetEnv no-gzip
426
     SetEnv dont-vary
427
     AllowEncodedSlashes On
428

    
429
     RequestHeader set X-Forwarded-Protocol "https"
430

    
431
     <Proxy * >
432
       Order allow,deny
433
       Allow from all
434
     </Proxy>
435

    
436
     SetEnv                proxy-sendchunked
437
     SSLProxyEngine        off
438
     ProxyErrorOverride    off
439

    
440
     ProxyPass        /static !
441
     ProxyPass        / http://localhost:8080/ retry=0
442
     ProxyPassReverse / http://localhost:8080/
443

    
444
     SSLEngine on
445
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
446
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
447
   </VirtualHost>
448
   </IfModule>
449

    
450
As in node1, enable sites and modules by running:
451

    
452
.. code-block:: console
453

    
454
   # a2enmod ssl
455
   # a2enmod rewrite
456
   # a2dissite default
457
   # a2ensite synnefo
458
   # a2ensite synnefo-ssl
459
   # a2enmod headers
460
   # a2enmod proxy_http
461

    
462
.. warning:: Do NOT start/restart the server yet. If the server is running::
463

    
464
       # /etc/init.d/apache2 stop
465

    
466
We are now ready with all general prerequisites for node2. Now that we have
467
finished with all general prerequisites for both nodes, we can start installing
468
the services. First, let's install Astakos on node1.
469

    
470

    
471
Installation of Astakos on node1
472
================================
473

    
474
To install astakos, grab the package from our repository (make sure you have made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:
477

    
478
.. code-block:: console
479

    
480
   # apt-get install snf-astakos-app
481

    
482
After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't install automatically, you should
install it manually by running:
487

    
488
.. code-block:: console
489

    
490
   # apt-get install snf-webproject
491

    
492
The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users who know
what they are doing and want to experiment with synnefo.
496

    
497

    
498
.. _conf-astakos:
499

    
500
Configuration of Astakos
501
========================
502

    
503
Conf Files
504
----------
505

    
506
After astakos is successfully installed, you will find the directory
507
``/etc/synnefo`` and some configuration files inside it. The files contain
508
commented configuration options, which are the default options. While installing
509
new snf-* components, new configuration files will appear inside the directory.
510
In this guide (and for all services), we will edit only the minimum necessary
511
configuration options, to reflect our setup. Everything else will remain as is.
512

    
513
After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available to empower the
administrator with extensively customizable setups.
516

    
517
For the snf-webproject component (installed as an astakos dependency), we
518
need the following:
519

    
520
Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
521
uncomment and edit the ``DATABASES`` block to reflect our database:
522

    
523
.. code-block:: console
524

    
525
   DATABASES = {
526
    'default': {
527
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
528
        'ENGINE': 'postgresql_psycopg2',
529
         # ATTENTION: This *must* be the absolute path if using sqlite3.
530
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
531
        'NAME': 'snf_apps',
532
        'USER': 'synnefo',                      # Not used with sqlite3.
533
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
534
        # Set to empty string for localhost. Not used with sqlite3.
535
        'HOST': '4.3.2.1',
536
        # Set to empty string for default. Not used with sqlite3.
537
        'PORT': '5432',
538
    }
539
   }
540

    
541
Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting which is used as a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:
545

    
546
.. code-block:: console
547

    
548
   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
549

    
550
For astakos specific configuration, edit the following options in
551
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :
552

    
553
.. code-block:: console
554

    
555
   ASTAKOS_DEFAULT_ADMIN_EMAIL = None
556

    
557
   ASTAKOS_IM_MODULES = ['local']
558

    
559
   ASTAKOS_COOKIE_DOMAIN = '.example.com'
560

    
561
   ASTAKOS_BASEURL = 'https://node1.example.com'
562

    
563
   ASTAKOS_SITENAME = '~okeanos demo example'
564

    
565
   ASTAKOS_RECAPTCHA_ENABLED = False
566

    
567
``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now only local
568
is supported. The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our
569
domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
570

    
571
``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email address.
Every time a new account is created, a notification is sent to this address.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.
576

    
577
.. note:: For the purpose of this guide, we have disabled recaptcha authentication.
578
    If you would like to enable it you have to edit the following options:
579

    
580
    .. code-block:: console
581

    
582
        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
583
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
584
        ASTAKOS_RECAPTCHA_USE_SSL = True
585
        ASTAKOS_RECAPTCHA_ENABLED = True
586

    
587
    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
588
    go to https://www.google.com/recaptcha/admin/create and create your own pair.
589

    
590
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
591

    
592
.. code-block:: console
593

    
594
   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
595

    
596
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
597

    
598
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'
599

    
600
These settings have to do with the black cloudbar endpoints and will be described
in more detail later on in this guide. For now, just edit the domain to point at
node1, which is where we have installed Astakos.
603

    
604
If you are an advanced user and want to use the Shibboleth Authentication method,
read the relevant :ref:`section <shibboleth-auth>`.
606

    
607
.. note:: Because Cyclades and Astakos are running on the same machine
608
    in our example, we have to deactivate the CSRF verification. We can do so
609
    by adding to
610
    ``/etc/synnefo/99-local.conf``:
611

    
612
    .. code-block:: console
613

    
614
        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
615
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')
616

    
617

    
618
Database Initialization
619
-----------------------
620

    
621
After configuration is done, we initialize the database by running:
622

    
623
.. code-block:: console
624

    
625
   # snf-manage syncdb
626

    
627
In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:
630

    
631
.. code-block:: console
632

    
633
   # snf-manage migrate im
634

    
635
Then, we load the pre-defined user groups:
636

    
637
.. code-block:: console
638

    
639
   # snf-manage loaddata groups
640

    
641
.. _services-reg:
642

    
643
Services Registration
644
---------------------
645

    
646
When the database is ready, we configure the elements of the Astakos cloudbar,
647
to point to our future services:
648

    
649
.. code-block:: console
650

    
651
   # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
652
   # snf-manage service-add "cyclades" https://node1.example.com/ui/
653
   # snf-manage service-add "pithos+" https://node2.example.com/ui/
654

    
655
Servers Initialization
656
----------------------
657

    
658
Finally, we initialize the servers on node1:
659

    
660
.. code-block:: console
661

    
662
   root@node1:~ # /etc/init.d/gunicorn restart
663
   root@node1:~ # /etc/init.d/apache2 restart
664

    
665
We have now finished the Astakos setup. Let's test it now.
666

    
667

    
668
Testing of Astakos
669
==================
670

    
671
Open your favorite browser and go to:
672

    
673
``http://node1.example.com/im``
674

    
675
If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.
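
If you prefer the command line, you can also verify the redirect with curl (an
optional check; any HTTP client will do):

.. code-block:: console

   # curl -sI http://node1.example.com/im | grep -i location
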
677

    
678
Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in your data in the sign-up form. Then click "SUBMIT". You should now
see a green box at the top, informing you that you made a successful request
and that the request has been sent to the administrators. So far so good, let's
assume that you created the user with username ``user@example.com``.
683

    
684
Now we need to activate that user. Return to a command prompt at node1 and run:
685

    
686
.. code-block:: console
687

    
688
   root@node1:~ # snf-manage user-list
689

    
690
This command should show you a list with only one user; the one we just created.
691
This user should have an id with a value of ``1``. It should also have an
692
"active" status with the value of ``0`` (inactive). Now run:
693

    
694
.. code-block:: console
695

    
696
   root@node1:~ # snf-manage user-modify --set-active 1
697

    
698
This modifies the active value to ``1``, and actually activates the user.
699
When running in production, the activation is done automatically with the different
types of moderation that Astakos supports. You can see the moderation methods
701
(by invitation, whitelists, matching regexp, etc.) at the Astakos specific
702
documentation. In production, you can also manually activate a user, by sending
703
him/her an activation email. See how to do this at the :ref:`User
704
activation <user_activation>` section.
705

    
706
Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
707
your browser again. Try to sign in using your new credentials. If the astakos
708
menu appears and you can see your profile, then you have successfully set up
709
Astakos.
710

    
711
Let's continue to install Pithos+ now.
712

    
713

    
714
Installation of Pithos+ on node2
715
================================
716

    
717
To install pithos+, grab the packages from our repository (make sure you have made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:
720

    
721
.. code-block:: console
722

    
723
   # apt-get install snf-pithos-app
724

    
725
After successful installation of snf-pithos-app, make sure that also
726
snf-webproject has been installed (marked as "Recommended" package). Refer to
727
the "Installation of Astakos on node1" section, if you don't remember why this
728
should happen. Now, install the pithos web interface:
729

    
730
.. code-block:: console
731

    
732
   # apt-get install snf-pithos-webclient
733

    
734
This package provides the standalone pithos web client. The web client is the
735
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
736
interface's cloudbar, at the top of the Astakos homepage.
737

    
738

    
739
.. _conf-pithos:
740

    
741
Configuration of Pithos+
742
========================
743

    
744
Conf Files
745
----------
746

    
747
After pithos+ is successfully installed, you will find the directory
748
``/etc/synnefo`` and some configuration files inside it, as you did in node1
749
after installation of astakos. Here, you will not have to change anything that
750
has to do with snf-common or snf-webproject. Everything is set at node1. You
751
only need to change settings that have to do with pithos+. Specifically:
752

    
753
Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:
755

    
756
.. code-block:: console
757

    
758
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
759

    
760
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'
761

    
762
   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
763
   PITHOS_AUTHENTICATION_USERS = None
764

    
765
   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
766

    
767
The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
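
As an optional sanity check, you can confirm from node2 that the database created
earlier is reachable (this assumes the ``postgresql-client`` package is installed
on node2):

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_pithos -c 'SELECT 1;'
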
772

    
773
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up this
directory in node1's "Pithos+ data directory setup" section.
777

    
778
The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.
781

    
782
The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
783
the Astakos node (node1 in our case):
784

    
785
.. code-block:: console
786

    
787
   # snf-manage service-list
788

    
789
The token has been generated automatically during the :ref:`Pithos+ service
790
registration <services-reg>`.
791

    
792
Then we need to setup the web UI and connect it to astakos. To do so, edit
793
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:
794

    
795
.. code-block:: console
796

    
797
   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
798
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"
799

    
800
The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
801
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
802
pithos+ feedback form. Astakos already provides a generic feedback form for all
803
services, so we use this one.
804

    
805
Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
806
pithos+ web UI with the astakos web UI (through the top cloudbar):
807

    
808
.. code-block:: console
809

    
810
   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
811
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
812
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
813
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'
814

    
815
The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
816
cloudbar.
817

    
818
The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
819
Astakos service. You can see all :ref:`registered services <services-reg>` by
820
running on the Astakos node (node1):
821

    
822
.. code-block:: console
823

    
824
   # snf-manage service-list
825

    
826
The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos service's
827
``id`` as shown by the above command, in our case ``3``.
828

    
829
The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
830
pithos+ web client to get from astakos all the information needed to fill its
831
own cloudbar. So we put our astakos deployment urls there.
832

    
833
Servers Initialization
834
----------------------
835

    
836
After configuration is done, we initialize the servers on node2:
837

    
838
.. code-block:: console
839

    
840
   root@node2:~ # /etc/init.d/gunicorn restart
841
   root@node2:~ # /etc/init.d/apache2 restart
842

    
843
You have now finished the Pithos+ setup. Let's test it now.
844

    
845

    
846
Testing of Pithos+
847
==================
848

    
849
Open your browser and go to the Astakos homepage:
850

    
851
``http://node1.example.com/im``
852

    
853
Login, and you will see your profile page. Now, click the "pithos+" link on the
854
top black cloudbar. If everything was set up correctly, this will redirect you
855
to:
856

    
857
``https://node2.example.com/ui``
858

    
859
and you will see the blue interface of the Pithos+ application.  Click the
860
orange "Upload" button and upload your first file. If the file gets uploaded
861
successfully, then this is your first sign of a successful Pithos+ installation.
862
Go ahead and experiment with the interface to make sure everything works
863
correctly.
864

    
865
You can also use the Pithos+ clients to sync data from your Windows PC or MAC.
866

    
867
If you don't stumble on any problems, then you have successfully installed
868
Pithos+, which you can use as a standalone File Storage Service.
869

    
870
If you would like to do more, such as:
871

    
872
 * Spawning VMs
873
 * Spawning VMs from Images stored on Pithos+
874
 * Uploading your custom Images to Pithos+
875
 * Spawning VMs from those custom Images
876
 * Registering existing Pithos+ files as Images
877
 * Connecting VMs to the Internet
 * Creating Private Networks
 * Adding VMs to Private Networks
880

    
881
please continue with the rest of the guide.
882

    
883

    
884
Cyclades (and Plankton) Prerequisites
885
=====================================
886

    
887
Before proceeding with the Cyclades (and Plankton) installation, make sure you
888
have successfully set up Astakos and Pithos+ first, because Cyclades depends
889
on them. If you don't have a working Astakos and Pithos+ installation yet,
890
please return to the :ref:`top <quick-install-admin-guide>` of this guide.
891

    
892
Besides Astakos and Pithos+, you will also need a number of additional working
893
prerequisites, before you start the Cyclades installation.
894

    
895
Ganeti
896
------
897

    
898
`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
899
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
900
Please refer to the
901
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
902
gory details. A successful Ganeti installation concludes with a working
903
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
904
<GANETI_NODES>`.
905

    
906
The above Ganeti cluster can run on different physical machines than node1 and
907
node2 and can scale independently, according to your needs.
908

    
909
For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
910
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
911
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
912

    
913
We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti. If you are extremely impatient, you can arrive at
the setup assumed above by running on both nodes:
916

    
917
.. code-block:: console
918

    
919
   # apt-get install -t squeeze-backports ganeti2 ganeti-htools
920
   # modprobe drbd minor_count=255 usermode_helper=/bin/true
921

    
922
Unfortunately, stock Ganeti doesn't support IP pool management yet (we are
working hard to merge it upstream for Ganeti 2.7). Synnefo depends on the IP
pool functionality of Ganeti, so you have to use GRNET's patches for now. To
do so you have to build your own package from source. Please clone our local
repo:
927

    
928
.. code-block:: console
929

    
930
   # git clone https://code.grnet.gr/git/ganeti-local
931
   # cd ganeti-local
932
   # git checkout stable-2.6-ippool-hotplug-esi
933
   # git checkout debian-2.6
934

    
935
Then please check that you can compile ganeti:
936

    
937
.. code-block:: console
938

    
939
   # cd ganeti-local
940
   # ./automake.sh
941
   # ./configure
942
   # make
943

    
944
To do so you must have a correct build environment. Please refer to the INSTALL
file in the source tree. Most of the packages needed are listed here:
946

    
947
.. code-block:: console
948

    
949
   #  apt-get install graphviz automake lvm2 ssh bridge-utils iproute iputils-arping \
950
                      ndisc6 python python-pyopenssl openssl \
951
                      python-pyparsing python-simplejson \
952
                      python-pyinotify python-pycurl socat \
953
                      python-elementtree kvm qemu-kvm \
954
                      ghc6 libghc6-json-dev libghc6-network-dev \
955
                      libghc6-parallel-dev libghc6-curl-dev \
956
                      libghc-quickcheck2-dev hscolour hlint \
957
                      python-support python-paramiko \
958
                      python-fdsend python-ipaddr python-bitarray libjs-jquery fping
959

    
960
Now let's try to build the package:
961

    
962
.. code-block:: console
963

    
964
   # apt-get install git-buildpackage
965
   # mkdir ../build-area
966
   # git-buildpackage --git-upstream-branch=stable-2.6-ippool-hotplug-esi \
967
                   --git-debian-branch=debian-2.6 \
968
                   --git-export=INDEX \
969
                   --git-ignore-new
970

    
971
This will create two deb packages in build-area. You should then run on both
972
nodes:
973

    
974
.. code-block:: console
975

    
976
   # dpkg -i ../build-area/snf-ganeti.*deb
977
   # dpkg -i ../build-area/ganeti-htools.*deb
978
   # apt-get install -f
979

    
980
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's say
it's ``ganeti.node1.example.com``). Make sure node1 and node2 have root access
to each other using ssh keys and not passwords. Also, make sure there is an
LVM volume group named ``ganeti`` that will host your VMs' disks. Finally, set up
a bridge interface on the host machines (e.g. ``br0``).
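
For example, the volume group and password-less root SSH access could be prepared
as follows. This is only a sketch: ``/dev/sdb1`` is a hypothetical spare
partition, so adapt it to your storage layout, and repeat on both nodes as
needed:

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1

   # ssh-keygen -t rsa
   # ssh-copy-id root@node2.example.com

Then run on node1:
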
986

    
987
.. code-block:: console
988

    
989
   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
990
                                   --no-etc-hosts --vg-name=ganeti \
991
                                   --nic-parameters link=br0 --master-netdev eth0 \
992
                                   ganeti.node1.example.com
993
   root@node1:~ # gnt-cluster modify --default-iallocator hail
994
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
995
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0
996

    
997
   root@node1:~ # gnt-node add --no-node-setup --master-capable=yes \
998
                               --vm-capable=yes node2.example.com
999
   root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
1000
   root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default
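
At this point you can optionally verify that the cluster sees both nodes and is
healthy:

.. code-block:: console

   root@node1:~ # gnt-node list
   root@node1:~ # gnt-cluster verify
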
1001

    
1002
For any problems you may stumble upon installing Ganeti, please refer to the
1003
`official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_. Installation
1004
of Ganeti is out of the scope of this guide.
1005

    
1006
.. _cyclades-install-snfimage:
1007

    
1008
snf-image
1009
---------
1010

    
1011
Installation
1012
~~~~~~~~~~~~
1013
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
1014
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
1015
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
1016
node1 and node2. You can do this by running on *both* nodes:
1017

    
1018
.. code-block:: console
1019

    
1020
   # apt-get install snf-image-host snf-pithos-backend python-psycopg2
1021

    
1022
snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able to
1023
handle image files stored on Pithos+. It also needs `python-psycopg2` to be able
1024
to access the Pithos+ database. This is why we also install them on *all*
1025
VM-capable Ganeti nodes.
1026

    
1027
Now, you need to download and save the corresponding helper package. Please see
1028
`here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest package. Let's
1029
assume that you installed snf-image-host version 0.4.4-1. Then, you need
1030
snf-image-helper v0.4.4-1 on *both* nodes:
1031

    
1032
.. code-block:: console
1033

    
1034
   # cd /var/lib/snf-image/helper/
1035
   # wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.4.4-1_all.deb
1036

    
1037
.. warning:: Be careful: Do NOT install the snf-image-helper debian package.
1038
             Just put it under /var/lib/snf-image/helper/
1039

    
1040
Once you have downloaded the snf-image-helper package, create the helper VM by
1041
running on *both* nodes:
1042

    
1043
.. code-block:: console
1044

    
1045
   # ln -s snf-image-helper_0.4.4-1_all.deb snf-image-helper.deb
1046
   # snf-image-update-helper
1047

    
1048
This will create all the needed files under ``/var/lib/snf-image/helper/`` for
1049
snf-image-host to run successfully.
1050

    
1051
Configuration
1052
~~~~~~~~~~~~~
1053
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without needing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.
1057

    
1058
To do this, we need to set the corresponding variables in
1059
``/etc/default/snf-image``, to reflect our Pithos+ setup:
1060

    
1061
.. code-block:: console
1062

    
1063
   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"
1064

    
1065
   PITHOS_DATA="/srv/pithos/data"
1066

    
1067
If you have installed your Ganeti cluster on nodes other than node1 and node2, make
sure that ``/srv/pithos/data`` is visible to all of them.
1069

    
1070
If you would like to use Images that are also/only stored locally, you need to
1071
save them under ``IMAGE_DIR``, however this guide targets Images stored only on
1072
Pithos+.
1073

    
1074
Testing
1075
~~~~~~~
1076
You can test that snf-image is successfully installed by running on the
1077
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):
1078

    
1079
.. code-block:: console
1080

    
1081
   # gnt-os diagnose
1082

    
1083
This should return ``valid`` for snf-image.
1084

    
1085
If you are interested in learning more about snf-image's internals (and even use
1086
it alongside Ganeti without Synnefo), please see
1087
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information concerning
1088
installation instructions, documentation on the design and implementation, and
1089
supported Image formats.
1090

    
1091
.. _snf-image-images:
1092

    
1093
snf-image's actual Images
1094
-------------------------
1095

    
1096
Now that snf-image is installed successfully we need to provide it with some
1097
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
1098
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
1099
format. For more information about snf-image's Image formats see `here
1100
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.
1101

    
1102
:ref:`snf-image <snf-image>` also supports three (3) different locations for the
1103
above Images to be stored:
1104

    
1105
 * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in
1106
   :file:`/etc/default/snf-image`)
1107
 * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
1108
 * On Pithos+ (accessible natively, not only by its public URL)
1109

    
1110
For the purpose of this guide, we will use the `Debian Squeeze Base Image
1111
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official
1112
`snf-image page
1113
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
1114
of type ``diskdump``. We will store it in our new Pithos+ installation.
1115

    
1116
To do so, do the following:
1117

    
1118
a) Download the Image from the official snf-image page (`image link
1119
   <https://pithos.okeanos.grnet.gr/public/9epgb>`_).
1120

    
1121
b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web UI
1122
   or the command line client `kamaki
1123
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.
1124

    
1125
Once the Image is uploaded successfully, download the Image's metadata file
1126
from the official snf-image page (`image_metadata link
1127
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it, for
1128
spawning a VM from Ganeti, in the next section.
1129

    
1130
Of course, you can repeat the procedure to upload more Images, available from the
1131
`official snf-image page
1132
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.
1133

    
1134
.. _ganeti-with-pithos-images:
1135

    
1136
Spawning a VM from a Pithos+ Image, using Ganeti
1137
------------------------------------------------
1138

    
1139
Now, it is time to test our installation so far. So, we have Astakos and
1140
Pithos+ installed, we have a working Ganeti installation, the snf-image
1141
definition installed on all VM-capable nodes and a Debian Squeeze Image on
1142
Pithos+. Make sure you also have the `metadata file
1143
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.
1144

    
1145
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:
1146

    
1147
.. code-block:: console
1148

    
1149
   # gnt-instance add -o snf-image+default --os-parameters \
1150
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
1151
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
1152
                      testvm1
1153

    
1154
In the above command:
1155

    
1156
 * ``img_passwd``: the arbitrary root password of your new instance
1157
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
1158
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
1159
               should have the format
1160
               ``pithos://<username>/<container>/<filename>``:
1161
                * ``username``: ``user@example.com`` (defined during Astakos sign up)
1162
                * ``container``: ``pithos`` (default, if the Web UI was used)
1163
                * ``filename``: the name of file (visible also from the Web UI)
1164
 * ``img_properties``: taken from the metadata file. We use only the two mandatory
1165
                       properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
1166
                       <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_
1167

    
1168
If the ``gnt-instance add`` command returns successfully, then run:
1169

    
1170
.. code-block:: console
1171

    
1172
   # gnt-instance info testvm1 | grep "console connection"
1173

    
1174
to find out where to connect using VNC. If you can connect successfully and can
1175
login to your new instance using the root password ``my_vm_example_passw0rd``,
1176
then everything works as expected and you have your new Debian Base VM up and
1177
running.
1178

    
1179
If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
1180
to access the Pithos+ database and the Pithos+ backend data. Also, make sure
1181
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add``
1182
succeeds but you cannot connect, again find out what went wrong. Do *NOT*
1183
proceed to the next steps unless you are sure everything works till this point.
1184

    
1185
If everything works, you have successfully connected Ganeti with Pithos+. Let's
1186
move on to networking now.
1187

    
1188
.. warning::
1189
    You can bypass the networking sections and go straight to
1190
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to setup
1191
    the Cyclades Network Service, but only the Cyclades Compute Service
1192
    (recommended for now).
1193

    
1194
Networking Setup Overview
1195
-------------------------
1196

    
1197
This part is deployment-specific and must be customized based on the specific
1198
needs of the system administrator. However, to do so, the administrator needs
1199
to understand how each level handles Virtual Networks, to be able to setup the
1200
backend appropriately, before installing Cyclades. To do so, please read the
1201
:ref:`Network <networks>` section before proceeding.
1202

    
1203
Since synnefo 0.11 all network actions are managed with the snf-manage
1204
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
1205
snf-network, bridges, vlans) to be already configured correctly. The only
1206
actions needed at this point are:
1207

    
1208
a) Have Ganeti with IP pool management support installed.
1209

    
1210
b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.
1211

    
1212
c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.
1213

    
1214
In order to test that everything is set up correctly before installing Cyclades,
we will perform some tests in this section; the actual setup will be
done afterwards with snf-manage commands.
1217

    
1218
.. _snf-network:
1219

    
1220
snf-network
1221
~~~~~~~~~~~
1222

    
1223
snf-network includes the `kvm-vif-bridge` script that is invoked every time
1224
a tap (a VM's NIC) is created. Based on environment variables passed by
1225
Ganeti it issues various commands depending on the network type the NIC is
1226
connected to and sets up a corresponding dhcp lease.
1227

    
1228
Install snf-network on all Ganeti nodes:
1229

    
1230
.. code-block:: console
1231

    
1232
   # apt-get install snf-network
1233

    
1234
Then, in :file:`/etc/default/snf-network` set:
1235

    
1236
.. code-block:: console
1237

    
1238
   MAC_MASK=ff:ff:f0:00:00:00
1239

    
1240
.. _nfdhcpd:
1241

    
1242
nfdhcpd
1243
~~~~~~~
1244

    
1245
Each NIC's IP is chosen by Ganeti (with IP pool management support).
1246
`kvm-vif-bridge` script sets up dhcp leases and when the VM boots and
1247
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
1248
create a dhcp response.
1249

    
1250
.. code-block:: console
1251

    
1252
   # apt-get install nfqueue-bindings-python=0.3+physindev-1
1253
   # apt-get install nfdhcpd
1254

    
1255
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s). Those IPs will be passed as the DNS IP(s) of your new
VMs.
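
For illustration, the relevant lines might look like this (an excerpt only; the
surrounding section layout depends on the packaged default file, and ``4.3.2.10``
is just a placeholder DNS server):

.. code-block:: console

   dhcp_queue = 42
   nameservers = 4.3.2.10

Once you are finished, restart the server on all nodes:
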
1259

    
1260
.. code-block:: console
1261

    
1262
   # /etc/init.d/nfdhcpd restart
1263

    
1264
If you are using ``ferm``, then you need to run the following:
1265

    
1266
.. code-block:: console
1267

    
1268
   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
1269
   # /etc/init.d/ferm restart
1270

    
1271
or make sure the following is run after boot:
1272

    
1273
.. code-block:: console
1274

    
1275
   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42
1276

    
1277
and if you have IPv6 enabled:
1278

    
1279
.. code-block:: console
1280

    
1281
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
1282
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44
1283

    
1284
You can check which clients are currently served by nfdhcpd by running:
1285

    
1286
.. code-block:: console
1287

    
1288
   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`
1289

    
1290
When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.
1291

    
1292
Public Network Setup
1293
--------------------
1294

    
1295
The simplest way to achieve basic networking is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs will
connect. Packets will be "forwarded" to the router and then to the Internet. If
you want a more advanced setup (IP-less routing and proxy ARP), please refer to
the :ref:`Network <networks>` section.
1300

    
1301
Physical Host Setup
1302
~~~~~~~~~~~~~~~~~~~
1303

    
1304
Assuming ``eth0`` on both hosts is the public interface (directly connected
1305
to the router), run on every node:
1306

    
1307
.. code-block:: console
1308

    
1309
   # brctl addbr br0
1310
   # ip link set br0 up
1311
   # vconfig add eth0 100
1312
   # ip link set eth0.100 up
1313
   # brctl addif br0 eth0.100
1314

    
1315

    
1316
Testing a Public Network
1317
~~~~~~~~~~~~~~~~~~~~~~~~
1318

    
1319
Let's assume that you want to assign IPs from the ``5.6.7.0/27`` range to your
1320
new VMs, with ``5.6.7.1`` as the router's gateway. In Ganeti you can add the
1321
network by running:
1322

    
1323
.. code-block:: console
1324

    
1325
   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 --network-type=public --tags=nfdhcpd test-net-public
1326

    
1327
Then, connect the network to all your nodegroups. We assume that we only have
1328
one nodegroup (``default``) in our Ganeti cluster:
1329

    
1330
.. code-block:: console
1331

    
1332
   # gnt-network connect test-net-public default bridged br0
1333

    
1334
Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, the same way we did it in the
previous testing section. However, now we will also add one NIC, configured to
be managed by our previously defined network. Run on the GANETI-MASTER (node1):
1338

    
1339
.. code-block:: console
1340

    
1341
   # gnt-instance add -o snf-image+default --os-parameters \
1342
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
1343
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
1344
                      --net 0:ip=pool,network=test-net-public \
1345
                      testvm2
1346

    
1347
If the above returns successfully, connect to the new VM and run:
1348

    
1349
.. code-block:: console
1350

    
1351
   root@testvm2:~ # ip addr
1352
   root@testvm2:~ # ip route
1353
   root@testvm2:~ # cat /etc/resolv.conf
1354

    
1355
to check IP address (5.6.7.2), IP routes (default via 5.6.7.1) and DNS config
1356
(nameserver option in nfdhcpd.conf). This shows correct configuration of
1357
ganeti, snf-network and nfdhcpd.
1358

    
1359
Now ping the outside world.
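
For example (``www.debian.org`` is just an arbitrary external host):

.. code-block:: console

   root@testvm2:~ # ping -c 3 www.debian.org

If this works too, then you have also configured your physical host and router
correctly.
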
1361

    
1362
Make sure everything works as expected, before proceeding with the Private
1363
Networks setup.
1364

    
1365
.. _private-networks-setup:
1366

    
1367
Private Networks Setup
1368
----------------------
1369

    
1370
Synnefo supports two types of private networks:
1371

    
1372
 - based on MAC filtering
1373
 - based on physical VLANs
1374

    
1375
Both types provide Layer 2 isolation to the end-user.
1376

    
1377
For the first type a common bridge (e.g. ``prv0``) is needed, while for the second
a range of bridges (e.g. ``prv1..prv100``) is needed, each bridged on a different
physical VLAN. To ensure isolation among end-users' private networks, each network
has to have a different MAC prefix (for the filtering to take place) or to be
"connected" to a different bridge (actually a different VLAN).
1382

    
1383
Physical Host Setup
1384
~~~~~~~~~~~~~~~~~~~
1385

    
1386
We need to create the necessary VLANs/bridges: one for MAC-filtered private
networks and several (e.g. 20) for private networks based on physical VLANs.
Assuming ``eth0`` of both hosts is somehow (via cable/switch with VLANs
configured correctly) connected together, run on every node:
1392

    
1393
.. code-block:: console
1394

    
1395
   # apt-get install vlan
1396
   # modprobe 8021q
1397
   # iface=eth0
1398
   # for prv in $(seq 0 20); do
1399
	vlan=$prv
1400
	bridge=prv$prv
1401
	vconfig add $iface $vlan
1402
	ifconfig $iface.$vlan up
1403
	brctl addbr $bridge
1404
	brctl setfd $bridge 0
1405
	brctl addif $bridge $iface.$vlan
1406
	ifconfig $bridge up
1407
      done
1408

    
1409

The above will do the following:

 * provision 21 new bridges: ``prv0`` - ``prv20``
 * provision 21 new VLANs: ``eth0.0`` - ``eth0.20``
 * add each VLAN to the corresponding bridge

You can run ``brctl show`` on both nodes to see if everything was set up
correctly.
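
For example (an illustrative sketch; the bridge IDs are placeholders and the
exact output format depends on your ``bridge-utils`` version):

.. code-block:: console

   # brctl show
   bridge name     bridge id               STP enabled     interfaces
   prv0            8000.001122334455       no              eth0.0
   prv1            8000.001122334455       no              eth0.1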

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To test the Private Networks, we will create two instances and put them in the
same Private Networks (one MAC Filtered and one Physical VLAN). This means
that the instances will have a second NIC connected to the ``prv0``
pre-provisioned bridge and a third to ``prv1``.

We run the same command as in the Public Network testing section, but with
extra arguments for the second and third NIC:

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --network-type=private --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac default bridged prv0

   # gnt-network add --network=10.0.0.0/24 --tags=nfdhcpd --network-type=private test-net-prv-vlan
   # gnt-network connect test-net-prv-vlan default bridged prv1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      --net 2:ip=none,network=test-net-prv-vlan \
                      testvm4

Above, we create two instances with their first NIC connected to the internet,
their second NIC connected to a MAC filtered private Network and their third
NIC connected to the first Physical VLAN Private Network. Now, connect to the
instances using VNC and make sure everything works as expected (a console
sketch of steps c) - e) follows this list):

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a public IP.

 b) ``eth1`` will have MAC prefix ``aa:00:55``, while ``eth2`` will have the
    default one (``aa:00:00``).

 c) Bring ``eth1``/``eth2`` up: ``ip link set eth1/eth2 up``

 d) Run ``dhclient eth1/eth2``

 e) On testvm3, ping 192.168.1.2/10.0.0.2
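
A minimal console sketch of steps c) - e), run from inside testvm3 (the peer
addresses ``192.168.1.2`` and ``10.0.0.2`` are examples; use whatever addresses
testvm4 actually ends up with):

.. code-block:: console

   root@testvm3:~ # ip link set eth1 up
   root@testvm3:~ # ip link set eth2 up
   root@testvm3:~ # dhclient eth1
   root@testvm3:~ # dhclient eth2
   root@testvm3:~ # ping -c 3 192.168.1.2
   root@testvm3:~ # ping -c 3 10.0.0.2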

If everything works as expected, then you have finished the Network Setup at
the backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-ganeti-hook`` (all necessary hooks under ``/etc/ganeti/hooks``)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf``
configuration file. At a minimum, we need to set the RabbitMQ endpoint for all
tools that need it:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.

You have now finished with all the needed Prerequisites for Cyclades (and
Plankton). Let's move on to the actual Cyclades installation.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

We will install Cyclades (and Plankton) on node1. To do so, we install the
corresponding package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app

.. warning:: Make sure you have installed ``python-gevent`` version >= 0.13.6.
    This version is available in squeeze-backports and can be installed by
    running: ``apt-get install -t squeeze-backports python-gevent``

If all packages install successfully, then Cyclades and Plankton are installed
and we proceed with their configuration.


Configuration of Cyclades (and Plankton)
========================================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear
under ``/etc/synnefo/``, prefixed with ``20-snf-cyclades-app-``. We will
describe here only the minimal changes needed to get a working system. In
general, sane defaults have been chosen for most of the options, to cover most
of the common scenarios. However, if you want to tweak Cyclades feel free to do
so, once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   ASTAKOS_URL = 'https://node1.example.com/im/authenticate'

The ``ASTAKOS_URL`` denotes the authentication endpoint for Cyclades and is set
to point to Astakos (it should have the same value as Pithos+'s
``PITHOS_AUTHENTICATION_URL``, set up :ref:`previously <conf-pithos>`).

TODO: Document the Network Options here

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = '2'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://account.node1.example.com/im/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

``CLOUDBAR_ACTIVE_SERVICE`` points to an already registered Astakos service.
You can see all :ref:`registered services <services-reg>` by running on the
Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``CLOUDBAR_ACTIVE_SERVICE`` should be the cyclades service's
``id`` as shown by the above command, in our case ``2``.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Plankton Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos+ database (where the Image files are stored), so we set it
to point to our Pithos+ database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos+ data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/10-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-ui.conf``:

.. code-block:: console

   UI_LOGIN_URL = "https://node1.example.com/im/login"
   UI_LOGOUT_URL = "https://node1.example.com/im/logout"

The ``UI_LOGIN_URL`` option tells the Cyclades Web UI where to redirect users
if they are not logged in. We point that to Astakos.

The ``UI_LOGOUT_URL`` option tells the Cyclades Web UI where to redirect the
user when he/she logs out. We point that to Astakos, too.

Edit ``/etc/default/vncauthproxy``:

.. code-block:: console

   CHUID="www-data:nogroup"
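
If the vncauthproxy daemon was already running before this change, it will
probably need a restart for the new ``CHUID`` value to take effect. Assuming
its init script is installed at the usual location:

.. code-block:: console

   # /etc/init.d/vncauthproxy restart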

We have now finished with the basic Cyclades and Plankton configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI User <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can see that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

If something is not set correctly, you can modify the backend to reflect the
Ganeti installation with the ``snf-manage backend-modify`` command:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti cluster,
we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want to
learn more please see /*TODO*/.

Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a common bridge. Assuming a bridge (e.g. ``br0``) has been
created on every backend node, edit the Synnefo setting
``CUSTOM_BRIDGED_BRIDGE`` to ``'br0'`` and create the network:

.. code-block:: console

   $ snf-manage network-create --subnet=5.6.7.0/27 \
                               --gateway=5.6.7.1 \
                               --subnet6=2001:648:2FFC:1322::/64 \
                               --gateway6=2001:648:2FFC:1322::1 \
                               --public --dhcp --type=CUSTOM_BRIDGED \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   $ snf-manage reconcile-networks

You can see all available networks by running:

.. code-block:: console

   $ snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   $ snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   $ gnt-network list
   $ gnt-network info <network_name>

Create pools for Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

Once those resources have been provisioned, the admin has to define these two
pools in Synnefo:

.. code-block:: console

   root@testvm1:~ # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

   root@testvm1:~ # snf-manage pool-create --type=bridge --base=prv --size=20

Also, change the Synnefo setting in :file:`20-snf-cyclades-app-api.conf`:

.. code-block:: console

   PRIVATE_MAC_FILTERED_BRIDGE = 'prv0'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges.
By default it is not enabled during the installation of Cyclades, so let's
enable it in its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.
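
For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log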

``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

If all the above return successfully, then you have finished with the Cyclades
and Plankton installation and setup. Let's test our installation now.


Testing of Cyclades (and Plankton)
==================================

Cyclades Web UI
---------------

First of all we need to test that our Cyclades Web UI works correctly. Open
your browser and go to the Astakos home page. Log in and then click 'cyclades'
on the top cloud bar. This should redirect you to:

 `http://node1.example.com/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'.
The first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should currently be
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades (and Plankton) installation, we will use an Image stored
on Pithos+ to spawn a new VM from the Cyclades interface. We will describe all
the steps, even though you may already have uploaded an Image on Pithos+ in a
:ref:`previous <snf-image-images>` section:

 * Upload an Image file to Pithos+
 * Register that Image file to Plankton
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set astakos.url "https://node1.example.com"
   $ kamaki config set compute.url "https://node1.example.com/api/v1.1"
   $ kamaki config set image.url "https://node1.example.com/plankton"
   $ kamaki config set store.url "https://node2.example.com/v1"
   $ kamaki config set global.account "user@example.com"
   $ kamaki config set global.token "bdY_example_user_tokenYUff=="

The token in the last kamaki command is our user's (``user@example.com``)
token, as it appears on the user's `Profile` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly by
running:

.. code-block:: console

   $ kamaki config list

Upload an Image file to Pithos+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we could upload the
Image under the root ``Pithos`` container (as you may have done when uploading
the Image from the Pithos+ Web UI), we will create a new container called
``images`` and store the Image under that container. We do this for two
reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos+ files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki store create images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki store upload --container images \
                         /srv/images/debian_base-6.0-7-x86_64.diskdump \
                         debian_base-6.0-7-x86_64.diskdump

The first argument is the local path and the second is the remote path on
Pithos+. If the new container and the file appear on the Pithos+ Web UI, then
you have successfully created the container and uploaded the Image file.

Register an existing Image file to Plankton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the Image file has been successfully uploaded on Pithos+, we register it
to Plankton (so that it becomes visible to Cyclades), by running:

.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos+ file
``pithos://user@example.com/images/debian_base-6.0-7-x86_64.diskdump`` as an
Image in Plankton. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the
remaining properties are optional, but recommended, so that the Images appear
nicely on the Cyclades Web UI. ``Debian Base`` will appear as the name of this
Image. The ``OS`` property's valid values may be found in the ``IMAGE_ICONS``
variable inside the ``20-snf-cyclades-app-ui.conf`` configuration file.

``OSFAMILY`` and ``ROOT_PARTITION`` are mandatory because they will be passed
from Plankton to Cyclades and then to Ganeti and `snf-image` (also see the
:ref:`previous section <ganeti-with-pithos-images>`). All other properties are
used to show information on the Cyclades UI.

Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration.
Make sure you can see your Image file on the Pithos+ Web UI and that ``kamaki
image register`` returns successfully with all options and properties as shown
above.

If the Image appears on the list, select it and complete the wizard by
selecting a flavor and a name for your VM. Then finish by clicking 'Create'.
Make sure you write down your password, because you *WON'T* be able to retrieve
it later.

If everything was set up correctly, after a few minutes your new machine will
go to state 'Running' and you will be able to use it. Click 'Console' to
connect through VNC out of band, or click on the machine's icon to connect
directly via SSH or RDP (for Windows machines).
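
For example, to connect over SSH from any host that can reach the Public
Network (the address below is only a placeholder; use the public IP your
machine was actually assigned from the pool, and the password shown by the
wizard):

.. code-block:: console

   $ ssh root@5.6.7.3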

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components. Go ahead to the next section to test the Network
functionality from inside Cyclades and discover even more features.


General Testing
===============


Notes
=====