.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimal configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

 * Identity Management (Astakos)
 * Object Storage Service (Pithos+)
 * Compute Service (Cyclades)
 * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
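
After adding these lines, refresh the package index so the new repository
becomes visible to apt (apt may also warn about the repository key, which you
can accept or import according to your policy):

.. code-block:: console

   # apt-get update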

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.
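
How you share this directory is up to you; NFS is just one option. A minimal,
hedged sketch (assuming the ``nfs-kernel-server`` package on node1, the
``nfs-common`` package on node2, and export options you should adjust to your
own security needs) could look like this:

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos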

Before starting the synnefo installation, you will need basic third-party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn
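
The ``squeeze-backports`` suite is not enabled by default. If the above command
fails because the suite is unknown, you probably need to add the backports
repository first (the mirror below is only an example; use whichever Debian
backports mirror you normally use) and update the package index:

.. code-block:: console

   # echo "deb http://backports.debian.org/debian-backports squeeze-backports main" >> /etc/apt/sources.list
   # apt-get update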

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
related to the django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
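
Optionally, once node2 also has the postgresql packages installed (see the
Node2 section below), you can verify that the access rules work by connecting
to the new database from node2 with the password set above:

.. code-block:: console

   root@node2:~ # psql -h 4.3.2.1 -U synnefo -d snf_apps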

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

   #  SetEnv no-gzip
   #  SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>
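
The configuration above uses the default Debian "snakeoil" self-signed
certificate. If ``/etc/ssl/certs/ssl-cert-snakeoil.pem`` does not exist on your
system, it can be (re)generated with the ``ssl-cert`` package; for production
you should of course substitute a proper certificate:

.. code-block:: console

   # apt-get install ssl-cert
   # make-ssl-cert generate-default-snakeoil --force-overwrite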

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
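
You can verify that the user and its permissions were created as expected with:

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions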

We do not need to initialize the exchanges. This will be done automatically
during the Cyclades setup.

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports, as described for node1 above:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but those kinds of setups are
outside the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(same contents as in node1; you can just copy/paste the file):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default Debian
installs "Recommended" packages, but if you have changed your configuration and
the package didn't install automatically, you should explicitly install it by
running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users that know
what they are doing and want to experiment with synnefo.


Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                      # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
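
The value shown above is only an example. One simple way to generate a random
string for this purpose (an optional suggestion; any good source of randomness
will do) is:

.. code-block:: console

   $ python -c "import os; print os.urandom(30).encode('hex')"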

For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
           { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
           { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
           { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now only local
is supported. The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our
domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served by
astakos. The first element is used to point to a generic landing page for your
services (cyclades, pithos). If you don't have such a page it can be omitted.
The second and third elements point to our services themselves (the apps) and
should be set as above.

For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``,
go to https://www.google.com/recaptcha/admin/create and create your own pair.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a django superuser, so we select
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data in the sign-up form. Then click "SUBMIT". You should
now see a green box at the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good;
let's assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, the activation is done automatically with different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos-specific
documentation. In production, you can also manually activate a user, by sending
him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set on node1.
You only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only the following options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI where
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` registers the client as a new service
served by astakos. Its name should be identical to the ``id`` given in
astakos' ``ASTAKOS_CLOUD_SERVICES`` variable. Note that in the Astakos "Conf
Files" section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES``
list to the dictionary ``{ 'url':'https://nod...', 'name':'pithos+',
'id':'pithos' }``. This item represents the pithos+ service. The ``id`` we set
there is the ``id`` we want here.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

 * Spawning VMs
 * Spawning VMs from Images stored on Pithos+
 * Uploading your custom Images to Pithos+
 * Spawning VMs from those custom Images
 * Registering existing Pithos+ files as Images

please continue with the rest of the guide.


Installation of Cyclades (and Plankton) on node1
================================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. Plankton (the Image Registry service) will get installed
automatically along with Cyclades, because it is contained in the same Synnefo
component right now.

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Cyclades Prerequisites
----------------------

Ganeti
~~~~~~

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low-level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti. If you are extremely impatient, you can arrive at the
setup assumed above by running:

.. code-block:: console

   root@node1:~ # apt-get install ganeti2
   root@node1:~ # apt-get install ganeti-htools
   root@node2:~ # apt-get install ganeti2
   root@node2:~ # apt-get install ganeti-htools

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's say
it's ``ganeti.node1.example.com``). Make sure node1 and node2 have root access
to each other using ssh keys and not passwords. Also, make sure there is an
LVM volume group named ``ganeti`` that will host your VMs' disks. Finally, set
up a bridge interface on the host machines (e.g. ``br0``). Then run on node1:

.. code-block:: console

   root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init
                                   --no-etc-hosts --vg-name=ganeti
                                   --nic-parameters link=br0 --master-netdev eth0
                                   ganeti.node1.example.com
   root@node1:~ # gnt-cluster modify --default-iallocator hail
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
   root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

   root@node1:~ # gnt-node add --no-node-setup --master-capable=yes
                               --vm-capable=yes node2.example.com
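
The commands above assume the prerequisites mentioned earlier are already in
place: password-less root ssh between the two nodes, a volume group named
``ganeti`` and a ``br0`` bridge on both hosts. A rough, hedged sketch of
preparing them follows; the device ``/dev/sdb1`` is a placeholder you must
adapt to your hardware, and the bridge should be made persistent in
``/etc/network/interfaces``:

.. code-block:: console

   # ssh-keygen -t rsa                      # on each node, if no key exists yet
   # ssh-copy-id root@node2.example.com     # run on node1; repeat the other way round on node2
   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1
   # apt-get install bridge-utils
   # brctl addbr br0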

For any problems you may stumble upon while installing Ganeti, please refer to
the `official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_.
Installation of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Installation
````````````
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image-host

Now, you need to download and save the corresponding helper package. Please see
`here <https://code.grnet.gr/projects/snf-image/files>`_ for the latest package.
Let's assume that you installed snf-image-host version 0.3.5-1. Then, you need
snf-image-helper v0.3.5-1 on *both* nodes:

.. code-block:: console

   # cd /var/lib/snf-image/helper/
   # wget https://code.grnet.gr/attachments/download/1058/snf-image-helper_0.3.5-1_all.deb

.. warning:: Be careful: Do NOT install the snf-image-helper debian package.
             Just put it under /var/lib/snf-image/helper/

Once you have downloaded the snf-image-helper package, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # ln -s snf-image-helper_0.3.5-1_all.deb snf-image-helper.deb
   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image-host to run successfully.

Configuration
`````````````
snf-image supports native access to Images stored on Pithos+. This means that
snf-image can talk directly to the Pithos+ backend, without the need to provide
a public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

   PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

   PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.

Testing
```````

You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

snf-image's actual Images
~~~~~~~~~~~~~~~~~~~~~~~~~

Now that snf-image is installed successfully, we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image's Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

 * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR`` in
   :file:`/etc/default/snf-image`)
 * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
 * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ found on the official
`snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page (`image link
   <https://pithos.okeanos.grnet.gr/public/9epgb>`_).

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web UI
   or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page (`image_metadata link
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_). You will need it for
spawning a VM from Ganeti in the next section.

Of course, you can repeat the procedure to upload more Images, available from the
`official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

Spawning a VM from a Pithos+ Image, using Ganeti
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters
                      img_passwd=my_vm_example_passw0rd,
                      img_format=diskdump,
                      img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",
                      img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}'
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
               should have the format
               ``pithos://<username>/<container>/<filename>``:
                * ``username``: ``user@example.com`` (defined during Astakos sign up)
                * ``container``: ``pithos`` (default, if the Web UI was used)
                * ``filename``: the name of the file (visible also from the Web UI)
 * ``img_properties``: taken from the metadata file. Only the two mandatory
                       properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used. `Learn more
                       <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data. Also, make sure
you gave the correct ``img_id`` and ``img_properties``. If ``gnt-instance add``
succeeds but you cannot connect, again find out what went wrong. Do *NOT*
proceed to the next steps unless you are sure everything works up to this point.

If everything works, you have successfully connected Ganeti with Pithos+. Let's
move on to networking now.

.. warning::
    You can bypass the networking sections and go straight to `FIXME`, if you do
    not want to set up the Cyclades Network Service, but only the Cyclades
    Compute Service (recommended for now).

Network setup overview
~~~~~~~~~~~~~~~~~~~~~~

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately, before installing Cyclades.

Network @ Cyclades level
````````````````````````

Cyclades understands two types of Virtual Networks:

a) One common Public Network (Internet)
b) One or more distinct Private Networks (L2)

a) When a new VM is created, it instantly gets connected to the Public Network
   (Internet). This means it gets a public IPv4 and IPv6 and has access to the
   public Internet.

b) Then each user is able to create one or more Private Networks manually and
   add VMs inside those Private Networks. Private Networks provide Layer 2
   connectivity. All VMs inside a Private Network are completely isolated.

From the VM perspective, every Network corresponds to a distinct NIC. So, the
above are translated as follows:

a) Every newly created VM needs at least one NIC. This NIC connects the VM
   to the Public Network and thus should get a public IPv4 and IPv6.

b) For every Private Network, the VM gets a new NIC, which is added during the
   connection of the VM to the Private Network (without an IP). This NIC should
   have L2 connectivity with all other NICs connected to this Private Network.

To achieve the above, first of all, we need Network and IP Pool management support
at Ganeti level, for Cyclades to be able to issue the corresponding commands.

Network @ Ganeti level
``````````````````````

Currently, Ganeti does not support IP Pool management. However, we've been
actively in touch with the official Ganeti team, who are reviewing a relatively
big patchset that implements this functionality (you can find it at the
ganeti-devel mailing list). We hope that the functionality will be merged to
the Ganeti master branch soon and appear on Ganeti 2.7.

Furthermore, currently the `~okeanos service <http://okeanos.grnet.gr>`_ uses
the same patchset with slight differences on top of Ganeti 2.4.5. Cyclades
0.9 is compatible with this old patchset and we do not guarantee that it will
work with the updated patchset sent to ganeti-devel.

We do *NOT* recommend applying the patchset yourself on the current Ganeti
master, unless you are an experienced Cyclades and Ganeti integrator and you
really know what you are doing.

Instead, be a little patient and we hope that everything will work out of the
box, once the patchset makes it into the Ganeti master. When that happens,
Cyclades will get updated to become compatible with that Ganeti version.

Network @ Physical host level
`````````````````````````````

We talked about the two types of Network from the Cyclades perspective, from the
VMs perspective and from Ganeti's perspective. Finally, we need to talk about
the Networks from the physical (VM container) host's perspective.

If your version of Ganeti supports IP pool management, then you need to set up
your physical hosts for the two types of Networks. For the second type
(Private Networks), our reference installation uses a number of pre-provisioned
bridges (one for each Network), which are connected to the corresponding number
of pre-provisioned vlans on each physical host (node1 and node2). For the first
type (Public Network), our reference installation uses routing over one
pre-provisioned vlan on each host (node1 and node2). It also uses the `NFDHCPD`
package for dynamically serving specific public IPs managed by Ganeti.

Public Network setup
1088
~~~~~~~~~~~~~~~~~~~~
1089

    
1090
Physical hosts' public network setup
1091
````````````````````````````````````
1092

    
1093
The physical hosts' setup is out of the scope of this guide.
1094

    
1095
However, two common cases that you may want to consider (and choose from) are:
1096

    
1097
a) One public bridge, where all VMs' public tap interfaces will connect.
1098
b) IP-less routing over the same vlan on every host.
1099

    
1100
When you setup your physical hosts (node1 and node2) for the Public Network,
1101
then you need to inform Ganeti about the Network's IP range.
1102

    
1103
Add the public network to Ganeti
1104
````````````````````````````````
1105

    
1106
Once you have Ganeti with IP pool management up and running, you need to choose
1107
the public network for your VMs and add it to Ganeti. Let's assume, that you
1108
want to assign IPs from the ``5.6.7.0/27`` range to your new VMs, with
1109
``5.6.7.1`` as their gateway. You can add the network by running:
1110

    
1111
.. code-block:: console
1112

    
1113
   # gnt-network add --network=5.6.7.0/27 --gateway=5.6.7.1 public_network
1114

    
1115
Then, connect the network to all your nodegroups. We assume that we only have
1116
one nodegroup (``default``) in our Ganeti cluster:
1117

    
1118
.. code-block:: console
1119

    
1120
   # gnt-network connect public_network default public_link
1121

    
1122
Your new network is now ready from the Ganeti perspective. Now, we need to setup
1123
`NFDHCPD` to actually reply with the correct IPs (that Ganeti will choose for
1124
each NIC).
1125

    
1126
NFDHCPD
1127
```````
1128

    
1129
At this point, Ganeti knows about your preferred network, it can manage the IP
1130
pool and choose a specific IP for each new VM's NIC. However, the actual
1131
assignment of the IP to the NIC is not done by Ganeti. It is done after the VM
1132
boots and its dhcp client makes a request. When this is done, `NFDHCPD` will
1133
reply to the request with Ganeti's chosen IP. So, we need to install `NFDHCPD`
1134
on all VM-capable nodes of the Ganeti cluster (node1 and node2 in our case) and
1135
connect it to Ganeti:
1136

    
1137
.. code-block:: console
1138

    
1139
   # apt-get install nfdhcpd
1140

    
1141
Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
1142
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
1143
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
1144
VMs. Once you are finished, restart the server on all nodes:
1145

    
1146
.. code-block:: console
1147

    
1148
   # /etc/init.d/nfdhcpd restart
1149

    
1150
If you are using ``ferm``, then you need to run the following:
1151

    
1152
.. code-block:: console
1153

    
1154
   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
1155
   # /etc/init.d/ferm restart
1156

    
1157
Now, you need to connect `NFDHCPD` with Ganeti. To do that, you need to install
1158
a custom KVM ifup script for use by Ganeti, as ``/etc/ganeti/kvm-vif-bridge``,
1159
on all VM-capable GANETI-NODEs (node1 and node2). A sample implementation is
1160
provided along with `snf-cyclades-gtools <snf-cyclades-gtools>`, that will
1161
be installed in the next sections, however you will probably need to write your
1162
own, according to your underlying network configuration.

Testing the Public Network
``````````````````````````

So, we have set up the bridges/vlans on the physical hosts appropriately, we
have added the desired network to Ganeti, we have installed nfdhcpd and
installed the appropriate ``kvm-vif-bridge`` script under ``/etc/ganeti``.

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We assume you have used method (b) for setting up the
physical hosts. We will add a new VM, the same way we did in the previous
testing section. However, now we will also add one NIC, configured to be managed
by our previously defined network. Run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters
                      img_passwd=my_vm_example_passw0rd,
                      img_format=diskdump,
                      img_id="pithos://user@example.com/pithos/debian_base-6.0-7-x86_64.diskdump",
                      img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}'
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check
                      --net 0:ip=pool,mode=routed,link=public_link
                      testvm2

If the above returns successfully, connect to the new VM and run:

.. code-block:: console

   root@testvm2:~ # ifconfig -a

If a network interface appears with an IP from your Public Network's range
(``5.6.7.0/27``) and the corresponding gateway, then you have successfully
connected Ganeti with `NFDHCPD` (and ``kvm-vif-bridge`` works correctly).

Now ping the outside world. If this works too, then you have also configured
your physical hosts' networking correctly.

Make sure everything works as expected, before proceeding with the Private
Networks setup.

Private Networks setup
~~~~~~~~~~~~~~~~~~~~~~

Physical hosts' private networks setup
``````````````````````````````````````

Testing the Private Networks
````````````````````````````

Synnefo RAPI user
~~~~~~~~~~~~~~~~~

Once you have a working Ganeti installation, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades``. You can do this by editing the file
``/var/lib/ganeti/rapi/users`` and adding the line:

.. code-block:: console

   cyclades {HA1}a62c-example_hash_here-6f0436ddb write
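
The ``{HA1}`` value is an MD5 digest of ``username:realm:password``, where the
realm for Ganeti's RAPI is ``Ganeti Remote API``. You can produce it as follows
(``example_rapi_passw0rd`` is a placeholder password of your choice):

.. code-block:: console

   # echo -n "cyclades:Ganeti Remote API:example_rapi_passw0rd" | openssl md5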

More about Ganeti's RAPI users can be found `here
<http://docs.ganeti.org/ganeti/2.5/html/rapi.html#introduction>`_.


.. _cyclades-install-rabbitmq:

RabbitMQ
~~~~~~~~

RabbitMQ is used as a generic message broker for cyclades. It should be
installed on two separate :ref:`QUEUE <QUEUE_NODE>` nodes in a high availability
configuration as described here:

    http://www.rabbitmq.com/pacemaker.html

The values set for the user and password must be mirrored in the
``RABBIT_*`` variables in your settings, as managed by
:ref:`snf-common <snf-common>`.

.. todo:: Document an active-active configuration based on the latest version
   of RabbitMQ.

.. _cyclades-install-vncauthproxy:

vncauthproxy
~~~~~~~~~~~~

To support OOB console access to the VMs over VNC, the vncauthproxy
daemon must be running on every :ref:`APISERVER <APISERVER_NODE>` node.

.. note:: The Debian package for vncauthproxy undertakes all configuration
   automatically.

Download and install the latest vncauthproxy from its own repository,
at `https://code.grnet.gr/git/vncauthproxy`, or a specific commit:

.. code-block:: console

    $ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy

Create ``/var/log/vncauthproxy`` and set its permissions appropriately.
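
A minimal way to do this (assuming vncauthproxy runs as ``nobody``; adjust the
owner to whichever user your deployment actually uses):

.. code-block:: console

   # mkdir -p /var/log/vncauthproxy
   # chown nobody:nogroup /var/log/vncauthproxy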

Alternatively, build and install Debian packages.

.. code-block:: console

    $ git checkout debian
    $ dpkg-buildpackage -b -uc -us
    # dpkg -i ../vncauthproxy_1.0-1_all.deb

.. warning::
    **Failure to build the package on the Mac.**

    ``libevent``, a requirement for gevent which in turn is a requirement for
    vncauthproxy, is not included in `MacOSX` by default and installing it with
    MacPorts does not lead to a version that can be found by the gevent
    build process. A quick workaround is to execute the following commands::

        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy
        <the above fails>
        $ cd build/gevent
        $ sudo python setup.py -I/opt/local/include -L/opt/local/lib build
        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy

.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip
.. todo:: kpap: fix installation commands


Configuration of Cyclades (and Plankton)
========================================

This section targets the configuration of the prerequisites for cyclades,
and the configuration of the associated synnefo software components.

synnefo components
------------------

cyclades uses :ref:`snf-common <snf-common>` for settings.
Please refer to the configuration sections of
:ref:`snf-webproject <snf-webproject>`,
:ref:`snf-cyclades-app <snf-cyclades-app>`,
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for more
information on their configuration.

Ganeti
~~~~~~

Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the
``BACKEND_PREFIX_ID`` setting, using a custom ``PREFIX_ID``.

Database
~~~~~~~~

Once all components are installed and configured,
initialize the Django DB:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load fixtures ``{users, flavors, images}``,
which make the API usable by end users by defining a sample set of users,
hardware configurations (flavors) and OS images:

.. code-block:: console

   $ snf-manage loaddata /path/to/users.json
   $ snf-manage loaddata flavors
   $ snf-manage loaddata images

.. warning::
    Be sure to load a custom users.json and select a unique token
    for each of the initial and any other users defined in this file.
    **DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed
    configurations.

sample users.json file:

.. literalinclude:: ../../synnefo/db/fixtures/users.json

`download <../_static/users.json>`_

RabbitMQ
~~~~~~~~

Change ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup
<cyclades-install-rabbitmq>`.

.. include:: ../../Changelog


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====