.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

 * Identity Management (Astakos)
 * Object Storage Service (Pithos+)
 * Compute Service (Cyclades)
 * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow this
guide and simply stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and to the second as "node2". We will also assume that their domain
names are "node1.example.com" and "node2.example.com" and that their IPs are
"4.3.2.1" and "4.3.2.2" respectively.


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on both node1 and
node2; they are related to all the services (Astakos, Pithos+, Cyclades,
Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``

You also need a shared directory visible to both nodes. Pithos+ will save all
its data inside this directory. By "all data", we mean files, images, and
pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. Throughout this guide,
we will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2. Node2 has this directory mounted under
``/srv/pithos``, too.
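
Setting up NFS itself is beyond the scope of this guide, but a minimal sketch
follows (an illustration only, assuming the standard ``nfs-kernel-server`` /
``nfs-common`` Debian packages; adapt the export options to your environment).
On node1, export the directory; on node2, mount it:

.. code-block:: console

   # On node1, in /etc/exports:
   /srv/pithos 4.3.2.2(rw,sync,no_subtree_check)

   # On node2, in /etc/fstab:
   node1.example.com:/srv/pithos /srv/pithos nfs defaults 0 0

Whatever export options you choose, make sure the ``www-data`` user maps to the
same uid/gid on both nodes, so that Pithos+ can write to the share from either
side.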

Before starting the synnefo installation, you will need basic third-party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration
specific to a synnefo service for each node will be described in the service's
section.

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)
 * rabbitmq (message queue)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host the tables
of all django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

   root@node1:~ # su - postgres
   postgres@node1:~ $ psql
   postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the pithos+ backend, and
grant the ``synnefo`` user all privileges on it. This database could be created
on node2 instead, but we do it on node1 for simplicity. We will create all
needed databases on node1, and node2 will then connect to them.

.. code-block:: console

   postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
   postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

   listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

   host    all    all    4.3.2.1/32    md5
   host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
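
As a quick, optional sanity check, you can confirm that PostgreSQL is now
listening on all interfaces; port 5432 should show up bound to ``0.0.0.0``:

.. code-block:: console

   # netstat -lnt | grep 5432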

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following:

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node1.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node1.example.com

     Alias /static "/usr/share/synnefo/static"

   #  SetEnv no-gzip
   #  SetEnv dont-vary

     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule ^/login(.*) /im/login/redirect$1 [PT,NE]

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically
during the Cyclades setup.

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data
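
It is worth checking, from node2, that the ``www-data`` user can actually write
inside the shared directory over NFS (an optional sanity check; if it fails,
revisit your NFS export options and uid/gid mapping):

.. code-block:: console

   root@node2:~ # su -s /bin/sh www-data -c "touch /srv/pithos/data/test && rm /srv/pithos/data/test"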

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 * apache (http server)
 * gunicorn (WSGI http server)
 * postgresql (database)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are beyond the
scope of this guide.
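
You can, however, verify from node2 that the databases on node1 are reachable
(an optional sanity check; ``psql`` is installed with the postgresql packages).
Enter ``example_passw0rd`` when prompted; the listing should include
``snf_apps`` and ``snf_pithos``:

.. code-block:: console

   root@node2:~ # psql -h node1.example.com -U synnefo -l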

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/gunicorn.d/`` containing the following
(same as node1's, with an additional ``--timeout`` argument):

.. code-block:: console

   CONFIG = {
    'mode': 'django',
    'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
    },
    'working_dir': '/etc/synnefo',
    'user': 'www-data',
    'group': 'www-data',
    'args': (
      '--bind=127.0.0.1:8080',
      '--workers=4',
      '--log-level=debug',
      '--timeout=43200'
    ),
   }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

.. code-block:: console

   <VirtualHost *:80>
     ServerName node2.example.com

     RewriteEngine On
     RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
     RewriteRule ^(.*)$ - [F,L]
     RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
   </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

   <IfModule mod_ssl.c>
   <VirtualHost _default_:443>
     ServerName node2.example.com

     Alias /static "/usr/share/synnefo/static"

     SetEnv no-gzip
     SetEnv dont-vary
     AllowEncodedSlashes On

     RequestHeader set X-Forwarded-Protocol "https"

     <Proxy * >
       Order allow,deny
       Allow from all
     </Proxy>

     SetEnv                proxy-sendchunked
     SSLProxyEngine        off
     ProxyErrorOverride    off

     ProxyPass        /static !
     ProxyPass        / http://localhost:8080/ retry=0
     ProxyPassReverse / http://localhost:8080/

     SSLEngine on
     SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
     SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
   </VirtualHost>
   </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default, Debian
installs "Recommended" packages, but if you have changed your configuration and
the package did not install automatically, install it manually by running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install synnefo in a custom-made
django project. This corner case concerns only very advanced users that know
what they are doing and want to experiment with synnefo.

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

   DATABASES = {
    'default': {
        # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
        'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
        'NAME': 'snf_apps',
        'USER': 'synnefo',                      # Not used with sqlite3.
        'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
        # Set to empty string for localhost. Not used with sqlite3.
        'HOST': '4.3.2.1',
        # Set to empty string for default. Not used with sqlite3.
        'PORT': '5432',
    }
   }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a django-specific setting, used as a seed in
secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

   SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
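
If you need a quick way to produce such a random string, a Python one-liner
will do (just a sketch; any long, random string that you keep private is fine):

.. code-block:: console

   # python -c "import random, string; sr = random.SystemRandom(); print ''.join(sr.choice(string.letters + string.digits) for i in range(50))"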

For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

   ASTAKOS_IM_MODULES = ['local']

   ASTAKOS_COOKIE_DOMAIN = '.example.com'

   ASTAKOS_BASEURL = 'https://node1.example.com'

   ASTAKOS_SITENAME = '~okeanos demo example'

   ASTAKOS_CLOUD_SERVICES = (
           { 'url':'https://node1.example.com/im/', 'name':'~okeanos home', 'id':'cloud', 'icon':'home-icon.png' },
           { 'url':'https://node1.example.com/ui/', 'name':'cyclades', 'id':'cyclades' },
           { 'url':'https://node2.example.com/ui/', 'name':'pithos+', 'id':'pithos' })

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_USE_SSL = True

``ASTAKOS_IM_MODULES`` refers to the astakos login methods. For now, only local
is supported. ``ASTAKOS_COOKIE_DOMAIN`` should be the base URL of our
domain (for all services). ``ASTAKOS_BASEURL`` is the astakos home page.
``ASTAKOS_CLOUD_SERVICES`` contains all services visible to and served by
astakos. The first element of the tuple is used to point to a generic
landing page for your services (cyclades, pithos). If you don't have such a
page, it can be omitted. The second and third elements point to the services
themselves (the apps) and should be set as above.

For ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``,
go to https://www.google.com/recaptcha/admin/create and create your own pair.

Shibboleth Setup
----------------

Optionally, Astakos can delegate user authentication to a Shibboleth federation.

To set up shibboleth, install the package::

  apt-get install libapache2-mod-shib2

Adjust the configuration files in ``/etc/shibboleth`` appropriately.

Add in ``/etc/apache2/sites-available/synnefo-ssl``::

  ShibConfig /etc/shibboleth/shibboleth2.xml
  Alias      /shibboleth-sp /usr/share/shibboleth

  <Location /im/login/shibboleth>
    AuthType shibboleth
    ShibRequireSession On
    ShibUseHeaders On
    require valid-user
  </Location>

and before the line containing::

  ProxyPass        / http://localhost:8080/ retry=0

add::

  ProxyPass /Shibboleth.sso !

Then, enable the shibboleth module::

  a2enmod shib2

After passing through the apache module, the following tokens should be
available at the destination::

  eppn # eduPersonPrincipalName
  Shib-InetOrgPerson-givenName
  Shib-Person-surname
  Shib-Person-commonName
  Shib-InetOrgPerson-displayName
  Shib-EP-Affiliation
  Shib-Session-ID

Finally, add 'shibboleth' to ``ASTAKOS_IM_MODULES``.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node1:

.. code-block:: console

   root@node1:~ # /etc/init.d/gunicorn restart
   root@node1:~ # /etc/init.d/apache2 restart

Database Initialization
-----------------------

Then, we initialize the database by running:

.. code-block:: console

   # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

   # snf-manage migrate im

Finally, we load the pre-defined user groups:

.. code-block:: console

   # snf-manage loaddata groups

You have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage, click the "CREATE ACCOUNT" button
and fill in all your data at the sign-up form. Then click "SUBMIT". You should
now see a green box at the top, which informs you that you made a successful
request and that the request has been sent to the administrators. So far so
good.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

   root@node1:~ # snf-manage listusers

This command should show you a list with only one user: the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

   root@node1:~ # snf-manage modifyuser --set-active 1

This modifies the active value to ``1``, which actually activates the user.
When running in production, activation is done automatically, with the different
types of moderation that Astakos supports. You can see the moderation methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos-specific
documentation. In production, you can also manually activate a user by sending
him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
only these options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above, we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app the URI where
the astakos authentication API is available. If it is not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

   PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
   PITHOS_UI_FEEDBACK_URL = "https://node1.example.com/im/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use that one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = 'pithos'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` option tells the client where to find the astakos
common cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` option registers the client as a new
service served by astakos. Its name should be identical to the ``id`` name given
in astakos' ``ASTAKOS_CLOUD_SERVICES`` variable. Note that in the Astakos "Conf
Files" section, we actually set the third item of the ``ASTAKOS_CLOUD_SERVICES``
list to the dictionary ``{ 'url':'https://nod...', 'name':'pithos+',
'id':'pithos' }``. This item represents the pithos+ service. The ``id`` we set
there is the ``id`` we want here.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

   root@node2:~ # /etc/init.d/gunicorn restart
   root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Log in, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

 * Spawning VMs
 * Spawning VMs from Images stored on Pithos+
 * Uploading your custom Images to Pithos+
 * Spawning VMs from those custom Images
 * Registering existing Pithos+ files as Images

please continue with the rest of the guide.
811

    
812
Installation of Cyclades (and Plankton) on node1
813
================================================
814

    
815
Installation of cyclades is a two step process:
816

    
817
1. install the external services (prerequisites) on which cyclades depends
818
2. install the synnefo software components associated with cyclades
819

    
820
Prerequisites
821
-------------
822
.. _cyclades-install-ganeti:
823

    
824
Ganeti installation
825
~~~~~~~~~~~~~~~~~~~
826

    
827
Synnefo requires a working Ganeti installation at the backend. Installation
828
of Ganeti is not covered by this document, please refer to
829
`ganeti documentation <http://docs.ganeti.org/ganeti/current/html>`_ for all the
830
gory details. A successful Ganeti installation concludes with a working
831
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs <GANETI_NODES>`.

.. _cyclades-install-db:

Database
~~~~~~~~

Database installation is done as part of the
:ref:`snf-webproject <snf-webproject>` component.

.. _cyclades-install-rabbitmq:

RabbitMQ
~~~~~~~~

RabbitMQ is used as a generic message broker for cyclades. It should be
installed on two separate :ref:`QUEUE <QUEUE_NODE>` nodes in a high-availability
configuration, as described here:

    http://www.rabbitmq.com/pacemaker.html

After installation, create a user and set its permissions:

.. code-block:: console

    $ rabbitmqctl add_user <username> <password>
    $ rabbitmqctl set_permissions -p / <username> "^.*" ".*" ".*"

The values set for the user and password must be mirrored in the
``RABBIT_*`` variables in your settings, as managed by
:ref:`snf-common <snf-common>`.
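
For example, if the user and password created above were ``synnefo`` and
``example_passw0rd`` (hypothetical values; consult the :ref:`snf-common
<snf-common>` documentation for the exact variable names your version
expects), the settings would look roughly like:

.. code-block:: python

    # Hypothetical example values -- mirror whatever you passed to
    # rabbitmqctl above, and point RABBIT_HOST at your QUEUE node.
    RABBIT_HOST = "queue.example.com"
    RABBIT_USERNAME = "synnefo"
    RABBIT_PASSWORD = "example_passw0rd"
    RABBIT_VHOST = "/"
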

.. todo:: Document an active-active configuration based on the latest version
   of RabbitMQ.

.. _cyclades-install-vncauthproxy:

vncauthproxy
~~~~~~~~~~~~

To support out-of-band (OOB) console access to the VMs over VNC, the
vncauthproxy daemon must be running on every :ref:`APISERVER
<APISERVER_NODE>` node.

.. note:: The Debian package for vncauthproxy handles all configuration
   automatically.

Download and install the latest vncauthproxy from its own repository,
at ``https://code.grnet.gr/git/vncauthproxy``, or a specific commit:

.. code-block:: console

    $ bin/pip install -e git+https://code.grnet.gr/git/vncauthproxy@INSERT_COMMIT_HERE#egg=vncauthproxy

Create ``/var/log/vncauthproxy`` and set its permissions appropriately.
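
For example, assuming the daemon runs as the unprivileged user ``nobody``
(an assumption; adjust the owner to whatever user your vncauthproxy
actually runs as):

.. code-block:: console

   # mkdir /var/log/vncauthproxy
   # chown nobody:nogroup /var/log/vncauthproxy
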
885

    
886
Alternatively, build and install Debian packages.
887

    
888
.. code-block:: console
889

    
890
    $ git checkout debian
891
    $ dpkg-buildpackage -b -uc -us
892
    # dpkg -i ../vncauthproxy_1.0-1_all.deb
893

    
894
.. warning::
    **Failure to build the package on the Mac.**

    ``libevent``, a requirement of gevent, which in turn is a requirement of
    vncauthproxy, is not included in `MacOSX` by default, and installing it
    with MacPorts does not lead to a version that the gevent build process
    can find. A quick workaround is to execute the following commands::

        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy
        <the above fails>
        $ cd build/gevent
        $ sudo python setup.py -I/opt/local/include -L/opt/local/lib build
        $ cd $SYNNEFO
        $ sudo pip install -e git+https://code.grnet.gr/git/vncauthproxy@5a196d8481e171a#egg=vncauthproxy

.. todo:: Mention vncauthproxy bug, snf-vncauthproxy, inability to install using pip
.. todo:: kpap: fix installation commands

.. _cyclades-install-nfdhcpd:

NFDHCPD
~~~~~~~

Set up Synnefo-specific networking on the Ganeti backend.
This part is deployment-specific and must be customized based on the
specific needs of the system administrators.

A reference installation will use a Synnefo-specific KVM ifup script,
NFDHCPD and pre-provisioned Linux bridges to support public and private
network functionality. For this:

Grab NFDHCPD from its own repository (https://code.grnet.gr/git/nfdhcpd),
install it, and modify ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network
configuration.
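
A rough sketch of fetching and installing it from source (assuming a
standard ``setup.py``-based build; adapt to the packaging the repository
actually provides):

.. code-block:: console

   $ git clone https://code.grnet.gr/git/nfdhcpd
   $ cd nfdhcpd
   # python setup.py install
   # editor /etc/nfdhcpd/nfdhcpd.conf
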

Install a custom KVM ifup script for use by Ganeti, as
``/etc/ganeti/kvm-vif-bridge``, on GANETI-NODEs. A sample implementation is
provided under ``/contrib/ganeti-hooks``. Set ``NFDHCPD_STATE_DIR`` to point
to NFDHCPD's state directory, usually ``/var/lib/nfdhcpd``.

.. todo:: soc: document NFDHCPD installation, settle on KVM ifup script

.. _cyclades-install-snfimage:

snf-image
~~~~~~~~~

Install the :ref:`snf-image <snf-image>` Ganeti OS provider for image
deployment.

For :ref:`cyclades <cyclades>` to be able to launch VMs from specified
Images, you need the snf-image OS Provider installed on *all* Ganeti nodes.

Please see https://code.grnet.gr/projects/snf-image/wiki
for installation instructions and documentation on the design
and implementation of snf-image.

Please see https://code.grnet.gr/projects/snf-image/files
for the latest packages.

Images should be stored in ``extdump`` or ``diskdump`` format, in a directory
of your choice, configurable as ``IMAGE_DIR`` in
:file:`/etc/default/snf-image`.
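
For example, to keep your images under a hypothetical ``/srv/images``
directory, you would set in :file:`/etc/default/snf-image`:

.. code-block:: bash

   IMAGE_DIR="/srv/images"
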
958

    
959
synnefo components
960
------------------
961

    
962
You need to install the appropriate synnefo software components on each node,
963
depending on its type, see :ref:`Architecture <cyclades-architecture>`.
964

    
965
Most synnefo components have dependencies on additional Python packages.
966
The dependencies are described inside each package, and are setup
967
automatically when installing using :command:`pip`, or when installing
968
using your system's package manager.
969

    
970
Please see the page of each synnefo software component for specific
971
installation instructions, where applicable.
972

    
973
Install the following synnefo components:
974

    
975
Nodes of type :ref:`APISERVER <APISERVER_NODE>`
976
    Components
977
    :ref:`snf-common <snf-common>`,
978
    :ref:`snf-webproject <snf-webproject>`,
979
    :ref:`snf-cyclades-app <snf-cyclades-app>`
980
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>`
981
    Components
982
    :ref:`snf-common <snf-common>`,
983
    :ref:`snf-cyclades-gtools <snf-cyclades-gtools>`
984
Nodes of type :ref:`LOGIC <LOGIC_NODE>`
985
    Components
986
    :ref:`snf-common <snf-common>`,
987
    :ref:`snf-webproject <snf-webproject>`,
988
    :ref:`snf-cyclades-app <snf-cyclades-app>`.
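
Assuming the components are available as same-named Debian packages (as with
the Pithos+ packages earlier in this guide), installation on each node type
might look like:

.. code-block:: console

   # apt-get install snf-cyclades-app      # on APISERVER and LOGIC nodes
   # apt-get install snf-cyclades-gtools   # on GANETI-MASTER and all GANETI-NODEs

``snf-common`` and ``snf-webproject`` are pulled in automatically as
dependencies where packaged that way; otherwise install them explicitly.
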
989

    
990

    
991
Configuration of Cyclades (and Plankton)
992
========================================
993

    
994
This section targets the configuration of the prerequisites for cyclades,
995
and the configuration of the associated synnefo software components.
996

    
997
synnefo components
998
------------------
999

    
1000
cyclades uses :ref:`snf-common <snf-common>` for settings.
1001
Please refer to the configuration sections of
1002
:ref:`snf-webproject <snf-webproject>`,
1003
:ref:`snf-cyclades-app <snf-cyclades-app>`,
1004
:ref:`snf-cyclades-gtools <snf-cyclades-gtools>` for more
1005
information on their configuration.
1006

    
1007
Ganeti
1008
~~~~~~
1009

    
1010
Set ``GANETI_NODES``, ``GANETI_MASTER_IP``, ``GANETI_CLUSTER_INFO`` based on
1011
your :ref:`Ganeti installation <cyclades-install-ganeti>` and change the
1012
`BACKEND_PREFIX_ID`` setting, using an custom ``PREFIX_ID``.
1013

    
1014
Database
~~~~~~~~

Once all components are installed and configured,
initialize the Django DB:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load fixtures ``{users, flavors, images}``,
which make the API usable by end users by defining a sample set of users,
hardware configurations (flavors) and OS images:

.. code-block:: console

   $ snf-manage loaddata /path/to/users.json
   $ snf-manage loaddata flavors
   $ snf-manage loaddata images

.. warning::
    Be sure to load a custom users.json and select a unique token
    for each of the initial and any other users defined in this file.
    **DO NOT LEAVE THE SAMPLE AUTHENTICATION TOKENS** enabled in deployed
    configurations.

Sample users.json file:

.. literalinclude:: ../../synnefo/db/fixtures/users.json

`download <../_static/users.json>`_
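
To honor the warning above, each user needs its own unique token. One
illustrative way to generate one (a sketch, not part of synnefo itself) is:

.. code-block:: python

    import binascii
    import os

    def generate_token(nbytes=24):
        """Return a random hex string usable as a per-user auth token."""
        return binascii.hexlify(os.urandom(nbytes)).decode("ascii")

    # Paste a freshly generated token into the token field of each user
    # in your custom users.json.
    print(generate_token())
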

RabbitMQ
~~~~~~~~

Change ``RABBIT_*`` settings to match your :ref:`RabbitMQ setup
<cyclades-install-rabbitmq>`.

.. include:: ../../Changelog


Testing of Cyclades (and Plankton)
==================================


General Testing
===============


Notes
=====