.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

 * Identity Management (Astakos)
 * Object Storage Service (Pithos+)
 * Compute Service (Cyclades)
 * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the guide
and just stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order listed above. Cyclades and Plankton
will be installed in a single step (at the end), because at the moment they are
contained in the same software component. Furthermore, we will install all
services on the first physical node, except Pithos+, which will be installed on
the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they apply to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
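
After adding these lines, refresh the package index so that the new repository
becomes visible to ``apt`` (shown here for completeness):

.. code-block:: console

   # apt-get update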

You also need a shared directory visible to both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.
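
Setting up the NFS share itself is outside the scope of this guide; the
following is only a rough sketch of one way to do it on Debian Squeeze,
assuming the stock ``nfs-kernel-server``/``nfs-common`` packages and an export
restricted to node2 (adjust the export options to your environment):

.. code-block:: console

   root@node1:~ # apt-get install nfs-kernel-server
   root@node1:~ # mkdir -p /srv/pithos
   root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_subtree_check)' >> /etc/exports
   root@node1:~ # exportfs -ra

   root@node2:~ # apt-get install nfs-common
   root@node2:~ # mkdir -p /srv/pithos
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos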

Before starting the synnefo installation, you will need basic third-party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration
specific to a synnefo service for each node will be described in the service's
section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host the tables
of all Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;
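
Before leaving the prompt, you can quickly confirm that both databases and the
user exist, using the standard psql meta-commands:

.. code-block:: console

   postgres=# \l
   postgres=# \du
   postgres=# \q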

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
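
As a quick sanity check of the remote access rules, you can later try to
connect from node2 (this assumes the ``postgresql-client`` package is installed
there; you will be asked for the ``synnefo`` password):

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps -c 'SELECT 1;'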

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

   #  SetEnv no-gzip
   #  SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop
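
You can, however, verify the Apache configuration syntax without starting the
server; ``apache2ctl`` is part of the ``apache2`` package:

.. code-block:: console

   # apache2ctl configtest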

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data
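
As a quick sanity check (assuming the NFS mount on node2 is already in place),
confirm from node2 that the directory is visible with the same ownership and
that ``www-data`` can write to it:

.. code-block:: console

   root@node2:~ # ls -ld /srv/pithos/data
   root@node2:~ # su -s /bin/sh www-data -c 'touch /srv/pithos/data/.rwtest && rm /srv/pithos/data/.rwtest'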

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(same contents as in node1; you can just copy/paste the file):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default
Debian installs "Recommended" packages, but if you have changed your
configuration and the package didn't install automatically, you should
install it manually by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
Django project. This corner case concerns only very advanced users that know
what they are doing and want to experiment with synnefo.


Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                      # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django-specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
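
One convenient way to generate such a random string is with ``openssl``; any
other source of randomness works just as well:

.. code-block:: console

   # openssl rand -base64 48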

For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
           { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
           { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
           { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now only local
is supported. The ``ASTAKOS_COOKIE_DOMAIN`` should be the base URL of our
domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served by
astakos. The first element is used to point to a generic landing page for your
services (cyclades, pithos). If you don't have such a page it can be omitted.
The second and third elements point to our services themselves (the apps) and
should be set as above.

For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
go to https://www.google.com/recaptcha/admin/create and create your own pair.
516

    
517
Servers Initialization
518
----------------------
519

    
520
After configuration is done, we initialize the servers on node1:
521

    
522
.. code-block:: console
523

    
524
   root@node1:~ # /etc/init.d/gunicorn restart
525
   root@node1:~ # /etc/init.d/apache2 restart
526

    
527
Database Initialization
528
-----------------------
529

    
530
Then, we initialize the database by running:
531

    
532
.. code-block:: console
533

    
534
   # snf-manage syncdb
535

    
536
At this example we don't need to create a django superuser, so we select
537
``[no]`` to the question. After a successful sync, we run the migration needed
538
for astakos:
539

    
540
.. code-block:: console
541

    
542
   # snf-manage migrate im

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data at the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and the request has been sent to the administrators. So far so good.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1``, and actually activates the user.
When running in production, activation is done automatically with the different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos-specific
documentation. In production, you can also manually activate a user, by sending
him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to
the "Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only these two options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ backend where
to find its database. Above we tell pithos+ that its database is ``snf_pithos``
at node1 and to connect as user ``synnefo`` with password ``example_passw0rd``.
All those settings were set up in node1's "Database setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ backend where to
store its data. Above we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.
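
Optionally, you can verify both settings from node2 before continuing: that the
database on node1 is reachable with these credentials (this assumes the
``postgresql-client`` package is installed) and that the shared block path is
actually mounted here:

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -d snf_pithos -c 'SELECT 1;'
   root@node2:~ # ls -ld /srv/pithos/data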

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``CLOUDBAR_ACTIVE_SERVICE`` registers the client as a new service served by
astakos. Its name should be identical to the ``id`` name given at the astakos
``ASTAKOS_CLOUD_SERVICES`` variable. Note that at the Astakos "Conf Files"
section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES``
list, to the dictionary:
``{ 'url':'https://nod...', 'name':'pithos+', 'id':'pithos' }``. This item
represents the pithos+ service. The ``id`` we set there, is the ``id`` we want
here.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================


Installation of Cyclades (and Plankton) on node1
================================================

Installation of cyclades is a two-step process:

1. install the external services (prerequisites) on which cyclades depends
2. install the synnefo software components associated with cyclades

Prerequisites
-------------

.. _cyclades-install-ganeti:

Ganeti installation
~~~~~~~~~~~~~~~~~~~

Synnefo requires a working Ganeti installation at the backend. Installation
of Ganeti is not covered by this document; please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/current/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs <GANETI_NODES>`.

.. _cyclades-install-db:

Database
~~~~~~~~

Database installation is done as part of the
:ref:`snf-webproject <snf-webproject>` component.

.. _cyclades-install-rabbitmq:

RabbitMQ
~~~~~~~~

RabbitMQ is used as a generic message broker for cyclades. It should be
installed on two separate :ref:`QUEUE <QUEUE_NODE>` nodes in a high availability
configuration as described here:

    http://www.rabbitmq.com/pacemaker.html

After installation, create a user and set its permissions:

.. code-block:: console

    $ rabbitmqctl add_user <username> <password>
    $ rabbitmqctl set_permissions -p / <username> "^.*" ".*" ".*"

The values set for the user and password must be mirrored in the
``RABBIT_*`` variables in your settings, as managed by
:ref:`snf-common <snf-common>`.
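
You can confirm that the user and its permissions were registered correctly
with the standard ``rabbitmqctl`` listing commands:

.. code-block:: console

    $ rabbitmqctl list_users
    $ rabbitmqctl list_permissions -p /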

.. todo:: Document an active-active configuration based on the latest version
   of RabbitMQ.

.. _cyclades-install-vncauthproxy:

vncauthproxy
~~~~~~~~~~~~

To support OOB console access to the VMs over VNC, the vncauthproxy
daemon must be running on every :ref:`APISERVER <APISERVER_NODE>` node.

.. note:: The Debian package for vncauthproxy undertakes all configuration
   automatically.

Download and install the latest vncauthproxy from its own repository,
at https://code.grnet.gr/git/vncauthproxy, or a specific commit:

.. code-block:: console

    $ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy

Create ``/var/log/vncauthproxy`` and set its permissions appropriately.
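
For example, substituting for the placeholder whichever user your deployment
actually runs the daemon as:

.. code-block:: console

    # mkdir -p /var/log/vncauthproxy
    # chown <vncauthproxy-user> /var/log/vncauthproxy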

Alternatively, build and install Debian packages.

.. code-block:: console

    $ git checkout debian
    $ dpkg-buildpackage -b -uc -us
    # dpkg -i ../vncauthproxy_1.0-1_all.deb

.. warning::
    **Failure to build the package on the Mac.**

    ``libevent``, a requirement for gevent which in turn is a requirement for
    vncauthproxy, is not included in `MacOSX` by default and installing it with
    MacPorts does not lead to a version that can be found by the gevent
    build process. A quick workaround is to execute the following commands::

        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy
        <the above fails>
        $ cd build/gevent
        $ sudo python setup.py -I/opt/local/include -L/opt/local/lib build
        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy

.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip
.. todo:: kpap: fix installation commands

.. _cyclades-install-nfdhcpd:

NFDHCPD
~~~~~~~

Set up Synnefo-specific networking on the Ganeti backend.
This part is deployment-specific and must be customized based on the
specific needs of the system administrators.

A reference installation will use a Synnefo-specific KVM ifup script,
NFDHCPD and pre-provisioned Linux bridges to support public and private
network functionality. For this:

Grab NFDHCPD from its own repository (https://code.grnet.gr/git/nfdhcpd),
install it, and modify ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network
configuration.

Install a custom KVM ifup script for use by Ganeti, as
``/etc/ganeti/kvm-vif-bridge``, on GANETI-NODEs. A sample implementation is
provided under ``/contrib/ganeti-hooks``. Set ``NFDHCPD_STATE_DIR`` to point
to NFDHCPD's state directory, usually ``/var/lib/nfdhcpd``.

.. todo:: soc: document NFDHCPD installation, settle on KVM ifup script

.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Install the :ref:`snf-image <snf-image>` Ganeti OS provider for image
deployment.

For :ref:`cyclades <cyclades>` to be able to launch VMs from specified
Images, you need the snf-image OS Provider installed on *all* Ganeti nodes.

Please see https://code.grnet.gr/projects/snf-image/wiki
for installation instructions and documentation on the design
and implementation of snf-image.

Please see https://code.grnet.gr/projects/snf-image/files
for the latest packages.

Images should be stored in ``extdump`` or ``diskdump`` format in a directory
of your choice, configurable as ``IMAGE_DIR`` in
:file:`/etc/default/snf-image`.
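
For example, :file:`/etc/default/snf-image` is typically a shell-style defaults
file, so the setting would look like the following (the directory shown is only
an illustration; use whatever path you chose):

.. code-block:: console

   # illustrative path; any directory with enough free space works
   IMAGE_DIR="/srv/images"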

synnefo components
------------------

You need to install the appropriate synnefo software components on each node,
depending on its type; see :ref:`Architecture <cyclades-architecture>`.

Most synnefo components have dependencies on additional Python packages.
The dependencies are described inside each package, and are set up
automatically when installing using :command:`pip`, or when installing
using your system's package manager.

Please see the page of each synnefo software component for specific
installation instructions, where applicable.

Install the following synnefo components:

Nodes of type :ref:`APISERVER <APISERVER_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-webproject <snf-webproject>`,
    :ref:`snf-cyclades-app <snf-cyclades-app>`
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-cyclades-gtools <snf-cyclades-gtools>`
Nodes of type :ref:`LOGIC <LOGIC_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-webproject <snf-webproject>`,
    :ref:`snf-cyclades-app <snf-cyclades-app>`.


Configuration of Cyclades (and Plankton)
========================================

This section targets the configuration of the prerequisites for cyclades,
and the configuration of the associated synnefo software components.

synnefo components
------------------

cyclades uses :ref:`snf-common <snf-common>` for settings.
Please refer to the configuration sections of
:ref:`snf-webproject <snf-webproject>`,
:ref:`snf-cyclades-app <snf-cyclades-app>`,
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for more
information on their configuration.

Ganeti
~~~~~~

Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the
``BACKEND_PREFIX_ID`` setting, using a custom ``PREFIX_ID``.

Database
~~~~~~~~

Once all components are installed and configured,
initialize the Django DB:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load fixtures ``{users, flavors, images}``,
which make the API usable by end users by defining a sample set of users,
hardware configurations (flavors) and OS images:

.. code-block:: console

   $ snf-manage loaddata /path/to/users.json
   $ snf-manage loaddata flavors
   $ snf-manage loaddata images

.. warning::
    Be sure to load a custom users.json and select a unique token
    for each of the initial and any other users defined in this file.
    **DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed
    configurations.

sample users.json file:

.. literalinclude:: ../../synnefo/db/fixtures/users.json

`download <../_static/users.json>`_

RabbitMQ
~~~~~~~~

Change ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup
<cyclades-install-rabbitmq>`.

.. include:: ../../Changelog


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====