.. _quick-install-admin-guide:

Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's installation guide.

It describes how to install the whole Synnefo stack on two (2) physical nodes,
with minimum configuration. It installs Synnefo from Debian packages, and
assumes the nodes run Debian Wheezy. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos)
    * Compute Service (Cyclades)
    * Image Service (part of Cyclades)
    * Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the list above. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their public IPs are
"203.0.113.1" and "203.0.113.2" respectively. It is important that the two
machines are under the same domain name. If you choose to run a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the node1 section below for more information on how to do so.

    
General Prerequisites
=====================

These are the general Synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all Synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr wheezy/``
| ``deb-src http://apt.dev.grnet.gr wheezy/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Update your list of packages and continue with the installation:

.. code-block:: console

   # apt-get update

You also need a shared directory visible to both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2
has this directory mounted under ``/srv/pithos``, too.

Before starting the Synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a Synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).

    
Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * public certificate
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent
    * dnsmasq (DNS server)

You can install apache2, postgresql, ntp and rabbitmq by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp rabbitmq-server

To install gunicorn and gevent, run:

.. code-block:: console

   # apt-get install gunicorn python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

    
Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps``, which will host all tables
related to the Django apps. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the Pithos backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/9.1/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/9.1/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host    all    all    203.0.113.1/32    md5
    host    all    all    203.0.113.2/32    md5

Make sure to substitute "203.0.113.1" and "203.0.113.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

    
Certificate Creation
~~~~~~~~~~~~~~~~~~~~~

Node1 will host Cyclades. Cyclades should communicate with the other Synnefo
Services and users over a secure channel. In order for the connection to be
trusted, the keys provided to Apache below should be signed with a certificate.
This certificate should be added to all nodes. In case you don't have signed
keys, you can create a self-signed certificate and sign your keys with it. To
do so, run on node1:

.. code-block:: console

   # apt-get install openvpn
   # mkdir /etc/openvpn/easy-rsa
   # cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
   # cd /etc/openvpn/easy-rsa/2.0
   # vim vars

In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``:

.. code-block:: console

   # . ./vars
   # ./clean-all

Now you can create the certificate:

.. code-block:: console

   # ./build-ca

The previous command will create a ``ca.crt`` file in the directory
``/etc/openvpn/easy-rsa/2.0/keys``. Copy this file to the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

   # update-ca-certificates

to update the records. You will have to do this on node2 as well.

Now you can create the keys and sign them with the certificate:

.. code-block:: console

   # ./build-key-server node1.example.com

This will create a ``01.pem`` and a ``node1.example.com.key`` file in the
``/etc/openvpn/easy-rsa/2.0/keys`` directory. Copy these to ``/etc/ssl/certs/``
and ``/etc/ssl/private/`` respectively and use them in the apache2
configuration file below instead of the defaults.

    
Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>


Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex


.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

    
.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there should be a directory
called ``/srv/pithos`` visible to both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # mkdir /srv/pithos
   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

This directory must be shared via `NFS <https://en.wikipedia.org/wiki/Network_File_System>`_.
In order to do this, run:

.. code-block:: console

   # apt-get install rpcbind nfs-kernel-server

Now edit ``/etc/exports`` and add the following line:

.. code-block:: console

   /srv/pithos/ 203.0.113.2(rw,no_root_squash,sync,subtree_check)

Once done, run:

.. code-block:: console

   # /etc/init.d/nfs-kernel-server restart

    
DNS server setup
~~~~~~~~~~~~~~~~

If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:

.. code-block:: console

   # apt-get install dnsmasq

Then edit your ``/etc/hosts`` file as follows:

.. code-block:: console

   203.0.113.1     node1.example.com
   203.0.113.2     node2.example.com

dnsmasq will serve any IPs/domains found in ``/etc/hosts``.

There is a `"bug" in libevent 2.0.5 <http://sourceforge.net/p/levent/bugs/193/>`_,
where if you have multiple nameservers in your ``/etc/resolv.conf``, libevent
will round-robin against them. To avoid this, you must use a single nameserver
for all your needs. Edit your ``/etc/resolv.conf`` to include your DNS server:

.. code-block:: console

   nameserver 203.0.113.1

Because of the aforementioned bug, you can't specify more than one DNS server
in your ``/etc/resolv.conf``. In order for dnsmasq to serve domains not in
``/etc/hosts``, edit ``/etc/dnsmasq.conf`` and change the line starting with
``#resolv-file=`` to:

.. code-block:: console

   resolv-file=/etc/external-dns

Now create the file ``/etc/external-dns`` and specify any extra DNS servers you
want dnsmasq to query for domains, e.g., 8.8.8.8:

.. code-block:: console

   nameserver 8.8.8.8

In the ``/etc/dnsmasq.conf`` file, you can also specify the ``listen-address``
and the ``interface`` you would like dnsmasq to listen to.

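For example, to have dnsmasq answer DNS queries only on node1's public address,
you could add something like the following to ``/etc/dnsmasq.conf`` (the
address and interface name are assumptions; adjust them to your setup):

.. code-block:: console

   listen-address=203.0.113.1
   interface=eth0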
    
Finally, restart dnsmasq:

.. code-block:: console

   # /etc/init.d/dnsmasq restart

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

    
Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent
    * certificates
    * dnsmasq (DNS server)

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

To install gunicorn and gevent, run:

.. code-block:: console

   # apt-get install gunicorn python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

    
Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

    
Acquire certificate
~~~~~~~~~~~~~~~~~~~

Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/`` and run:

.. code-block:: console

   # update-ca-certificates

to update the records.


DNS Setup
~~~~~~~~~

Add the following line to the ``/etc/resolv.conf`` file

.. code-block:: console

   nameserver 203.0.113.1

to inform the node about the new DNS server.

As mentioned before, this should be the only ``nameserver`` entry in
``/etc/resolv.conf``.

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

    
Installation of Astakos on node1
================================

To install Astakos, grab the package from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file and updated, as
described previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-pithos-backend

.. _conf-astakos:

Configuration of Astakos
========================

Gunicorn setup
--------------

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:

.. code-block:: console

    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of Astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Conf Files
----------

After Astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with Synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an Astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '203.0.113.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

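Any sufficiently long random string will do. For example, assuming the
``openssl`` command-line tool is available, you could generate one like this
and paste the output into the setting:

.. code-block:: console

   # openssl rand -base64 48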
    
For Astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our installation,
shared by all services. ``ASTAKOS_BASE_URL`` is the Astakos top-level URL.
Appending an extra path (``/astakos`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

    
.. _email-configuration:

Email delivery configuration
----------------------------

Many of the Astakos operations require the server to notify service users and
administrators via email. For example, right after the signup process, the
service sends an email to the registered email address containing a
verification URL. After the user verifies the email address, Astakos once again
needs to notify administrators with a notice that a new account has just been
verified.

More specifically, Astakos sends emails in the following cases:

- An email containing a verification link after each signup process.
- An email to the people listed in the ``ADMINS`` setting after each email
  verification if the ``ASTAKOS_MODERATION`` setting is ``True``. The email
  notifies administrators that an additional action is required in order to
  activate the user.
- A welcome email to the user email and an admin notification to ``ADMINS``
  right after each account activation.
- Feedback messages submitted from the Astakos contact view and the Astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined, etc.) to project members or project owners.

Astakos uses the Django internal email delivery mechanism to send email
notifications. A simple configuration, using an external SMTP server to
deliver messages, is shown below. Alter the following example to match your
SMTP server characteristics. Notice that an SMTP server is needed for a proper
installation.

Edit ``/etc/synnefo/00-snf-common-admins.conf``:

.. code-block:: python

    EMAIL_HOST = "mysmtp.server.example.com"
    EMAIL_HOST_USER = "<smtpuser>"
    EMAIL_HOST_PASSWORD = "<smtppassword>"

    # this gets appended in all email subjects
    EMAIL_SUBJECT_PREFIX = "[example.com] "

    # Address to use for outgoing emails
    DEFAULT_FROM_EMAIL = "server@example.com"

    # Email where users can contact for support. This is used in html/email
    # templates.
    CONTACT_EMAIL = "server@example.com"

    # The email address that error messages come from
    SERVER_EMAIL = "server-errors@example.com"

Notice that since email settings might be required by applications other than
Astakos, they are defined in a different configuration file than the one
previously used to set Astakos specific settings.

Refer to the
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
for additional information on available email settings.

As mentioned in the previous section, the recipients list differs based on the
operation that triggers an email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk, etc.), Synnefo provides the following settings located in
``00-snf-common-admins.conf``:

.. code-block:: python

    ADMINS = (('Admin name', 'admin@example.com'),
              ('Admin2 name', 'admin2@example.com'))
    MANAGERS = (('Manager name', 'manager@example.com'),)
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.com'),)

Alternatively, it may be convenient to send emails to a file, instead of an
actual SMTP server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:

.. code-block:: python

    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
    EMAIL_FILE_PATH = '/tmp/app-messages'

    
Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use, first monkey-patch psycopg2. For Django, add this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '203.0.113.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

    
Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations needed
for Astakos:

.. code-block:: console

    # snf-manage migrate im
    # snf-manage migrate quotaholder_app
    # snf-manage migrate oa2

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

    
.. _services-reg:

Services Registration
---------------------

When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (Astakos,
Cyclades and Pithos) along with the services they provide. Note that you
have to register at least Astakos in order to have a usable authentication
system. For each component, you will be asked to provide two URLs: its base
URL and its UI URL.

The former is the location where the component resides; it should equal
the ``<component_name>_BASE_URL`` as specified in the respective component
settings. For example, the base URL for Astakos would be
``https://node1.example.com/astakos``.

The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/`` where ``base_url`` is the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --ui-url new_ui_url``.)

The command will also automatically register the resource definitions
offered by the services.

.. code-block:: console

    # snf-component-register

.. note::

   This command is equivalent to running the following series of commands;
   it registers the three components in Astakos and then in each host it
   exports the respective service definitions, copies the exported json file
   to the Astakos host, where it finally imports it:

    .. code-block:: console

       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
       astakos-host$ snf-manage service-export-astakos > astakos.json
       astakos-host$ snf-manage service-import --json astakos.json
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json cyclades.json
       pithos-host$ snf-manage service-export-pithos > pithos.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json pithos.json

Notice that in this installation Astakos and Cyclades are on node1 and Pithos
is on node2.

    
Setting Default Base Quota for Resources
----------------------------------------

We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or
memory size limits, you can append a unit to the value, e.g. 10240 MB,
10 GB, etc. Use the special value ``inf``, if you don't want to restrict a
resource.

.. code-block:: console

    # snf-manage resource-modify --default-quota-interactive

Setting Resource Visibility
---------------------------

It is possible to control whether a resource is visible to the users via the
API or the Web UI. The default value for these options is denoted inside the
default resource definitions. Note that the system always checks and
enforces resource quotas, regardless of their visibility. You can inspect the
current status with::

   # snf-manage resource-list

You can change a resource's visibility with::

   # snf-manage resource-modify <resource> --api-visible=True (or --ui-visible=True)

    
.. _pithos_view_registration:

Register pithos view as an OAuth 2.0 client
-------------------------------------------

Starting from Synnefo version 0.15, the pithos view has to be granted
authorization by Astakos in order to get access to the data of a protected
pithos resource.

During the authorization grant procedure, it has to authenticate itself with
Astakos, since the latter has to prevent serving requests by
unknown/unauthorized clients.

Each OAuth 2.0 client is identified by a client identifier (client_id).
Moreover, confidential clients are authenticated via a password
(client_secret).
Each client also has to declare at least one redirect URI, so that Astakos
will be able to validate the redirect URI provided during the authorization
code request.
If a client is trusted (like a pithos view), Astakos grants access on behalf
of the resource owner; otherwise the resource owner has to be asked.

To register the pithos view as an OAuth 2.0 client in Astakos, we have to run
the following command::

    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url https://node2.example.com/pithos/ui/view

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.

    
Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/astakos``

If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can
see the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in all your data in the sign up form. Then click "SUBMIT". You should
now see a green box on the top, which informs you that you made a successful
request and that the request has been sent to the administrators. So far so
good, let's assume that you created the user with username
``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user: the one we just
created. This user should have an id of ``1`` and the flags "active" and
"verified" set to False. Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-modify 1 --verify --accept

This verifies the user email and activates the user.
When running in production, the activation is done automatically with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the Astakos
specific documentation. In production, you can also manually activate a user,
by sending him/her an activation email. See how to do this in the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/``
with your browser again. Try to sign in using your new credentials. If the
Astakos menu appears and you can see your profile, then you have successfully
set up Astakos.

Let's continue to install Pithos now.

    
Installation of Pithos on node2
===============================

To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

Now, install the Pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone Pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "Pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.

    
.. _conf-pithos:

Configuration of Pithos
=======================

Gunicorn setup
--------------

Copy the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
(as happened for node1):

.. code-block:: console

    # cp /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of Pithos. If the server is running::

       # /etc/init.d/gunicorn stop

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of Astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with Pithos. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
the following options:

.. code-block:: console

   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up in node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.

The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
The Astakos service is used for user management (authentication, quotas, etc.)

The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.

The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with Astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of the
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the Astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from Astakos all the information needed to fill its
own cloudbar. So we put our Astakos deployment URLs there.

The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view
in order to authenticate itself with Astakos during the authorization grant
procedure and it should contain the credentials issued for the pithos view
in `the pithos view registration step`__.

__ pithos_view_registration_

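For example, if you kept the client identifier ``pithos-view`` used in the
registration step above, the setting would look roughly like the following
sketch (use the exact identifier and secret you issued there):

.. code-block:: console

   PITHOS_OAUTH2_CLIENT_CREDENTIALS = ("pithos-view", "<secret>")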
    
1142
Pooling and Greenlets
1143
---------------------
1144

    
1145
Pithos is pooling-ready without the need of further configuration, because it
1146
doesn't use a Django DB. It pools HTTP connections to Astakos and Pithos
1147
backend objects for access to the Pithos DB.
1148

    
1149
However, as in Astakos, since we are running with Greenlets, it is also
1150
recommended to modify psycopg2 behavior so it works properly in a greenlet
1151
context. This means adding the following lines at the top of your
1152
``/etc/synnefo/10-snf-webproject-database.conf`` file:
1153

    
1154
.. code-block:: console
1155

    
1156
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
1157
    make_psycopg_green()
1158

    
1159
Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
1160
mentioned above, depending on your setup) argument on your
1161
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
1162
like this:
1163

    
1164
.. code-block:: console
1165

    
1166
    CONFIG = {
1167
     'mode': 'django',
1168
     'environment': {
1169
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
1170
     },
1171
     'working_dir': '/etc/synnefo',
1172
     'user': 'www-data',
1173
     'group': 'www-data',
1174
     'args': (
1175
       '--bind=127.0.0.1:8080',
1176
       '--workers=4',
1177
       '--worker-class=gevent',
1178
       '--log-level=debug',
1179
       '--timeout=43200'
1180
     ),
1181
    }
1182

    
1183
Stamp Database Revision
1184
-----------------------
1185

    
1186
Pithos uses the alembic_ database migrations tool.
1187

    
1188
.. _alembic: http://alembic.readthedocs.org
1189

    
1190
After a successful installation, we should stamp it at the most recent
1191
revision, so that future migrations know where to start upgrading in
1192
the migration history.
1193

    
1194
.. code-block:: console
1195

    
1196
    root@node2:~ # pithos-migrate stamp head
1197

    
Mount the NFS directory
-----------------------

First install the package nfs-common by running:

.. code-block:: console

   root@node2:~ # apt-get install nfs-common

Now create the directory ``/srv/pithos/`` and mount the remote directory on it:

.. code-block:: console

   root@node2:~ # mkdir /srv/pithos/
   root@node2:~ # mount -t nfs 203.0.113.1:/srv/pithos/ /srv/pithos/

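If you want this mount to persist across reboots, you can also add a matching
entry to node2's ``/etc/fstab`` (a sketch, using the NFS server address assumed
throughout this guide):

.. code-block:: console

   203.0.113.1:/srv/pithos/ /srv/pithos nfs defaults 0 0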
    
Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos setup. Let's test it now.

Testing of Pithos
=================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/astakos``

Login, and you will see your profile page. Now, click the "Pithos" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui``

and you will see the blue interface of the Pithos application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos
    * Uploading your custom Images to Pithos
    * Spawning VMs from those custom Images
    * Registering existing Pithos files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.

    
Kamaki
======

`Kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_ is an
OpenStack API client library and command line interface with custom extensions
specific to Synnefo.

Kamaki Installation and Configuration
-------------------------------------

To install kamaki run:

.. code-block:: console

   # apt-get install kamaki

Now, visit

 `https://node1.example.com/astakos/ui/`

log in and click on ``API access``. Scroll all the way to the bottom of the
page, click on the orange ``Download your .kamakirc`` button and save the file
as ``.kamakirc`` in your home directory.

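The downloaded file is a plain kamaki configuration file. Roughly, it should
contain entries along the lines of the following sketch (the cloud name is
arbitrary and the token is the one issued for your user by Astakos; the exact
contents may vary between kamaki versions):

.. code-block:: console

   [global]
   default_cloud = default

   [cloud "default"]
   url = https://node1.example.com/astakos/identity/v2.0
   token = <your_authentication_token>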
    
That's all, kamaki is now configured and you can start using it. For a list of
commands, see the `official documentation <http://www.synnefo.org/docs/kamaki/latest/commands.html>`_.

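As a quick sanity check, you can try a couple of read-only commands, for
example (command names may differ slightly between kamaki versions; consult the
documentation linked above):

.. code-block:: console

   $ kamaki user info
   $ kamaki file list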
    
Cyclades Prerequisites
======================

Before proceeding with the Cyclades installation, make sure you have
successfully set up Astakos and Pithos first, because Cyclades depends on
them. If you don't have a working Astakos and Pithos installation yet, please
return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the `ganeti documentation <http://docs.ganeti.org/ganeti/2.8/html>`_ for all
the gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Ganeti Prerequisites
--------------------

You will need the ``lvm2`` and ``vlan`` packages, so run:

.. code-block:: console

   # apt-get install lvm2 vlan

Ganeti requires FQDNs. To properly configure your nodes, please
see `this <http://docs.ganeti.org/ganeti/2.6/html/install.html#hostname-issues>`_.

Ganeti requires an extra available IP and its FQDN, e.g., ``203.0.113.100`` and
``ganeti.node1.example.com``. Add this IP to your DNS server configuration, as
explained above.

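If you are using the dnsmasq setup described earlier, this is just one more
line in node1's ``/etc/hosts``:

.. code-block:: console

   203.0.113.100   ganeti.node1.example.com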
    
Also, Ganeti will need a volume group with the same name (e.g., ``ganeti``)
across all nodes, of at least 20GiB. To create the volume group,
see `this <http://www.tldp.org/HOWTO/LVM-HOWTO/createvgs.html>`_.

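As a minimal sketch, assuming a spare partition ``/dev/sdb1`` on each node (the
device name is an assumption; use whatever disk or partition you actually have
available), the volume group can be created with:

.. code-block:: console

   # pvcreate /dev/sdb1
   # vgcreate ganeti /dev/sdb1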
    
Moreover, node1 and node2 must have the same DSA and RSA keys and the same
``authorized_keys`` under ``/root/.ssh/``, for password-less root SSH between
each other. To generate said keys, see
`this <https://wiki.debian.org/SSH#Using_shared_keys>`_.

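A minimal sketch of sharing one root key pair between the two nodes (run on
node1; it assumes password-based root SSH to node2 is temporarily possible so
that the files can be copied over):

.. code-block:: console

   root@node1:~ # ssh-keygen -t rsa
   root@node1:~ # cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
   root@node1:~ # scp -r /root/.ssh/ root@node2.example.com:/root/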
    
In the following sections, we assume that the public interface of all nodes is
``eth0`` and that there are two extra interfaces, ``eth1`` and ``eth2``, which
can also be vlans on your primary interface, e.g., ``eth0.1`` and ``eth0.2``,
in case you don't have multiple physical interfaces. For information on how to
create vlans, please see
`this <https://wiki.debian.org/NetworkConfiguration#Howto_use_vlan_.28dot1q.2C_802.1q.2C_trunk.29_.28Etch.2C_Lenny.29>`_.

Finally, set up two bridges on the host machines (e.g., ``br1``/``br2`` on
``eth1``/``eth2`` respectively), as described
`here <https://wiki.debian.org/BridgeNetworkConnections>`_.

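A minimal ``/etc/network/interfaces`` sketch for one such bridge (it assumes
the ``bridge-utils`` package is installed and uses the interface and bridge
names from above; repeat accordingly for ``br2`` on ``eth2``):

.. code-block:: console

   auto br1
   iface br1 inet manual
       bridge_ports eth1
       bridge_stp off
       bridge_fd 0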
    
Ganeti Installation and Initialization
--------------------------------------

We assume that Ganeti will use the KVM hypervisor. To install KVM, run on all
Ganeti nodes:

.. code-block:: console

   # apt-get install qemu-kvm

It's time to install Ganeti. To be able to use hotplug (which will be part of
the official Ganeti 2.10), we recommend using our Ganeti package version:

``2.8.3+snap1+b64v1+kvm1+ext1+lockfix1+ipfix1+backports1-1~wheezy``

Let's briefly explain each patch set:

    * snap adds snapshot support for the ext disk template
    * b64 saves networks' bitarrays in a more compact representation
    * kvm exports disk geometry to the kvm command and adds migration capabilities
    * ext

      * exports logical id in hooks
      * allows cache, heads, cyls arbitrary params to reach the kvm command

    * lockfix is a workaround for Issue #621
    * ipfix does not require an IP if mode is routed (needed for IPv6-only NICs)
    * backports is a set of patches backported from stable-2.10

      * Hotplug support
      * Better networking support (NIC configuration scripts)
      * Change IP pool to support NAT instances
      * Change RAPI to accept depends body argument and shutdown_timeout

To install Ganeti run:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools ganeti-haskell ganeti2

Ganeti will make use of DRBD. To enable this and make the configuration
permanent, you have to do the following:

.. code-block:: console

   # modprobe drbd minor_count=255 usermode_helper=/bin/true
   # echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules

Then run on node1:

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br1 \
                    --default-iallocator hail \
                    --hypervisor-parameters kvm:kernel_path=,vnc_bind_address=0.0.0.0 \
                    --specs-nic-count min=0,max=16 \
                    --master-netdev eth0 ganeti.node1.example.com

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

``br1`` will be the default interface for any newly created VMs.

You can verify that the ganeti cluster is successfully set up by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-cluster verify

    
1428
.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the `snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ OS
Definition installed on *all* VM-capable Ganeti nodes. This means we need
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning::
    snf-image uses ``curl`` for handling URLs. This means that it will
    not work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. In case you haven't followed the guide's
    directions about the certificates, in order to circumvent this you should
    edit the file ``/etc/default/snf-image`` and change ``#CURL="curl"`` to
    ``CURL="curl -k"`` on every node.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos. This means that
it can talk directly to the Pithos backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos backend.

To do this, we need to set the corresponding variable in
``/etc/default/snf-image``, to reflect our Pithos setup:

.. code-block:: console

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible by all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only
on Pithos.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested to learn more about snf-image's internals (and even use
it alongside Ganeti without Synnefo), please see
`here <http://www.synnefo.org/docs/snf-image/latest/index.html>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully, we need to provide it with some
Images.
:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
supports Images stored in ``extdump``, ``ntfsdump`` or ``diskdump`` format. We
recommend the use of the ``diskdump`` format. For more information about
snf-image Image formats, see `here
<http://www.synnefo.org/docs/snf-image/latest/usage.html#image-format>`_.

:ref:`snf-image <http://www.synnefo.org/docs/snf-image/latest/index.html>`
also supports three (3) different locations for the above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via a public URL e.g: http://... or ftp://...)
    * On Pithos (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_. The
image is of type ``diskdump``. We will store it in our new Pithos installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos installation, either using the Pithos Web
   UI or the command line client `kamaki
   <http://www.synnefo.org/docs/kamaki/latest/index.html>`_.

To upload the file using kamaki, run:

.. code-block:: console

   # kamaki file upload debian_base-6.0-x86_64.diskdump pithos

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it for spawning a VM from
Ganeti in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<http://www.synnefo.org/docs/snf-image/latest/usage.html#sample-images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos Image, using Ganeti
-----------------------------------------------

Now, it is time to test our installation so far. We have Astakos and Pithos
installed, a working Ganeti installation, the snf-image definition installed
on all VM-capable nodes, a Debian Squeeze Image on Pithos, and kamaki installed
and configured. Make sure you also have the
`metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_
for this image.

To spawn a VM from a Pithos file, we need to know:

    1) The hashmap of the file
    2) The size of the file

If you uploaded the file with kamaki as described above, run:

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump

else, replace ``pithos`` and ``debian_base-6.0-x86_64.diskdump`` with the
container and filename you used when uploading the file.

The hashmap is the ``x-object-hash`` field, while the size of the file is the
``content-length`` field that the ``kamaki file info`` command returns.

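If you just want to pick out those two fields, you can filter the output, for
example:

.. code-block:: console

   # kamaki file info pithos:debian_base-6.0-x86_64.diskdump | grep -i -e 'x-object-hash' -e 'content-length'
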
Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithosmap://<HashMap>/<Size>",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos (our case), this
   should have the format ``pithosmap://<HashMap>/<size>``:

               * ``HashMap``: the map of the file
               * ``size``: the size of the file, same size as reported in
                 ``ls -l filename``

 * ``img_properties``: taken from the metadata file. Only the two mandatory
                       properties ``OSFAMILY`` and ``ROOT_PARTITION`` are used
                       here. `Learn more
                       <http://www.synnefo.org/docs/snf-image/latest/usage.html#image-properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
log in to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos database and the Pithos backend data (newer versions
require a UUID instead of a username). Another issue you may encounter is that
in relatively slow setups, you may need to raise the default
``HELPER_*_TIMEOUTS`` in ``/etc/default/snf-image``. Also, make sure you gave
the correct ``img_id`` and ``img_properties``. If ``gnt-instance add`` succeeds
but you cannot connect, again find out what went wrong. Do *NOT* proceed to the
next steps unless you are sure everything works up to this point.

If everything works, you have successfully connected Ganeti with Pithos. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. Synnefo supports a lot of different
networking configurations in the backend (spanning from very simple to more
advanced), which are beyond the scope of this guide.

In this section, we'll describe the simplest scenario, which will enable the
VMs to have access to the public Internet and also access to arbitrary private
networks.

At the end of this section the networking setup on the two nodes will look like
this:

.. image:: images/install-guide-networks.png
   :width: 70%
   :target: _images/install-guide-networks.png

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network is a set of custom scripts that perform all the necessary actions,
so that VMs have a working networking configuration.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

nfdhcpd is an NFQUEUE based daemon, answering DHCP requests and running locally
on every Ganeti node. Its leases file gets automatically updated by snf-network,
using information provided by Ganeti. Install it on all Ganeti nodes:

.. code-block:: console

   # apt-get install python-nfqueue=0.4+physindev-1~wheezy
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP(s) (for instance, the one running dnsmasq, or Google's
public DNS server ``8.8.8.8``). Restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

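For reference, the two settings mentioned above could look like this inside
``/etc/nfdhcpd/nfdhcpd.conf`` (only an excerpt; keep the rest of the packaged
configuration file as shipped):

.. code-block:: console

   # excerpt from /etc/nfdhcpd/nfdhcpd.conf -- only the two settings discussed
   # above are shown, in place inside the packaged file
   dhcp_queue = 42
   nameservers = 8.8.8.8
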
In order for nfdhcpd to receive the VMs' requests, we have to mangle all DHCP
traffic coming from the corresponding interfaces. To accomplish that, run:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and append it to your ``/etc/rc.local``.

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

After running the above, check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

In the following section, we'll guide you through a very basic network setup.
This assumes the following:

    * Node1 has access to the public network via eth0.
    * Node1 will become a NAT server for the VMs.
    * All nodes have ``br1/br2`` dedicated for the VMs' public/private traffic.
    * VMs' public network is ``10.0.0.0/24`` with gateway ``10.0.0.1``.

Setting up the NAT server on node1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To set up the NAT server on node1, run:

.. code-block:: console

   # ip addr add 10.0.0.1/24 dev br1
   # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   # echo 1 > /proc/sys/net/ipv4/ip_forward

and append it to your ``/etc/rc.local``.

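For reference, node1's ``/etc/rc.local`` could then end up containing something
like the following sketch (it simply re-applies the nfdhcpd mangle rule and the
NAT setup shown above on every boot; keep the file's final ``exit 0``):

.. code-block:: console

   # /etc/rc.local on node1 (sketch)
   iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42
   ip addr add 10.0.0.1/24 dev br1
   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   echo 1 > /proc/sys/net/ipv4/ip_forward
   exit 0
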
Testing the Public Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~

First add the network in Ganeti:

.. code-block:: console

   # gnt-network add --network=10.0.0.0/24 --gateway=10.0.0.1 --tags=nfdhcpd test-net-public

Then, provide connectivity mode and link to the network:

.. code-block:: console

   # gnt-network connect test-net-public bridged br1

Now, it is time to test that the backend infrastructure is correctly set up for
the Public Network. We will add a new VM, almost the same way we did it in the
previous testing section. However, now we'll also add one NIC, configured to be
managed from our previously defined network.

Fetch the Debian Base image locally (on all nodes), by running:

.. code-block:: console

   # wget http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump -O /var/lib/snf-image/debian_base-6.0-x86_64.diskdump

Also, on all nodes, bring all ``br*`` interfaces up:

.. code-block:: console

   # ifconfig br1 up
   # ifconfig br2 up

Finally, run on the GANETI-MASTER (node1):

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

The following things should happen:

    * Ganeti creates a tap interface.
    * snf-network bridges the tap interface to ``br1`` and updates nfdhcpd state.
    * nfdhcpd serves the IP 10.0.0.2 to the interface of ``testvm2``.

Now try to ping the outside world e.g., ``www.synnefo.org`` from inside the VM
(connect to the VM using VNC as before).

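For example, from inside testvm2 (the exact output will differ; this simply
confirms that NAT through node1 and DNS resolution work):

.. code-block:: console

   root@testvm2:~# ping -c 3 www.synnefo.org
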
Make sure everything works as expected before proceeding with the Private
Networks setup.

.. _private-networks-setup:

Private Networks Setup
----------------------

In this section, we'll describe a basic network configuration that will provide
isolated private networks to the end-users. All private network traffic will
pass through ``br1`` and isolation will be guaranteed with a specific set of
``ebtables`` rules.

Testing the Private Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We'll create two instances and connect them to the same Private Network. This
means that the instances will have a second NIC connected to ``br1``.

.. code-block:: console

   # gnt-network add --network=192.168.1.0/24 --mac-prefix=aa:00:55 --tags=nfdhcpd,private-filtered test-net-prv-mac
   # gnt-network connect test-net-prv-mac bridged br1

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac \
                      testvm3

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id=debian_base-6.0-x86_64,img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      --net 0:ip=pool,network=test-net-public \
                      --net 1:ip=pool,network=test-net-prv-mac -n node2 \
                      testvm4

Above, we create two instances with their first NIC connected to the internet
and their second NIC connected to a MAC-filtered private Network. Now, connect
to the instances using VNC and make sure everything works as expected:

 a) The instances have access to the public internet through their first eth
    interface (``eth0``), which has been automatically assigned a "public" IP.

 b) ``eth1`` will have the MAC prefix ``aa:00:55``.

 c) On testvm3, ping ``192.168.1.2``.

If everything works as expected, then you have finished the Network Setup at the
backend for both types of Networks (Public & Private).

.. _cyclades-gtools:

Cyclades Ganeti tools
---------------------

In order for Ganeti to be connected with Cyclades later on, we need the
`Cyclades Ganeti tools` available on all Ganeti nodes (node1 & node2 in our
case). You can install them by running on both nodes:

.. code-block:: console

   # apt-get install snf-cyclades-gtools

This will install the following:

 * ``snf-ganeti-eventd`` (daemon to publish Ganeti related messages on RabbitMQ)
 * ``snf-progress-monitor`` (used by ``snf-image`` to publish progress messages)

Configure ``snf-cyclades-gtools``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package will install the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf``
configuration file. At the very least, we need to set the RabbitMQ endpoint for
all tools that need it:

.. code-block:: console

  AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above variables should reflect your :ref:`Message Queue setup
<rabbitmq-setup>`. This file should be edited on all Ganeti nodes.

Connect ``snf-image`` with ``snf-progress-monitor``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, we need to configure ``snf-image`` to publish progress messages during
the deployment of each Image. To do this, we edit ``/etc/default/snf-image`` and
set the corresponding variable to ``snf-progress-monitor``:

.. code-block:: console

   PROGRESS_MONITOR="snf-progress-monitor"

This file should be edited on all Ganeti nodes.

.. _rapi-user:

Synnefo RAPI user
-----------------

As a last step before installing Cyclades, create a new RAPI user that will
have ``write`` access. Cyclades will use this user to issue commands to Ganeti,
so we will call the user ``cyclades`` with password ``example_rapi_passw0rd``.
You can do this by first running:

.. code-block:: console

   # echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5

and then putting the output in ``/var/lib/ganeti/rapi/users`` as follows:

.. code-block:: console

   cyclades {HA1}55aec7050aa4e4b111ca43cb505a61a0 write

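If you prefer, the two steps above can be combined into a single (hypothetical)
one-liner that appends the entry directly; double-check the resulting line
afterwards:

.. code-block:: console

   # echo "cyclades {HA1}$(echo -n 'cyclades:Ganeti Remote API:example_rapi_passw0rd' | openssl md5 | awk '{print $NF}')" >> /var/lib/ganeti/rapi/users
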
You can read more about Ganeti's RAPI users
`here <http://docs.ganeti.org/ganeti/2.6/html/rapi.html#introduction>`_.

You have now finished with all needed Prerequisites for Cyclades. Let's move on
to the actual Cyclades installation.


Installation of Cyclades on node1
=================================

This section describes the installation of Cyclades. Cyclades is Synnefo's
Compute service. The Image Service will get installed automatically along with
Cyclades, because it is contained in the same Synnefo component.

We will install Cyclades on node1. To do so, we install the corresponding
package by running on node1:

.. code-block:: console

   # apt-get install snf-cyclades-app memcached python-memcache

If all packages install successfully, then Cyclades are installed and we
proceed with their configuration.

Since version 0.13, Synnefo uses the VMAPI in order to prevent sensitive data
needed by ``snf-image`` (e.g. the VM password) from being stored in the Ganeti
configuration. This is achieved by storing all sensitive information in a cache
backend and exporting it via the VMAPI. The cache entries are invalidated after
the first request. Synnefo uses `memcached <http://memcached.org/>`_ as a
`Django <https://www.djangoproject.com/>`_ cache backend.

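Since the VMAPI relies on memcached, it is worth verifying that the memcached
daemon you just installed is listening (a quick check, assuming the default
port 11211 and that ``nc`` is available):

.. code-block:: console

   # echo stats | nc -q 1 127.0.0.1 11211 | head -n 5
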
Configuration of Cyclades
=========================

Conf files
----------

After installing Cyclades, a number of new configuration files will appear under
``/etc/synnefo/`` prefixed with ``20-snf-cyclades-app-``. We will describe here
only the minimal changes needed to result in a working system. In general,
sane defaults have been chosen for most of the options, to cover most of the
common scenarios. However, if you want to tweak Cyclades, feel free to do so,
once you get familiar with the different options.

Edit ``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

   CYCLADES_BASE_URL = 'https://node1.example.com/cyclades'
   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   CYCLADES_SERVICE_TOKEN = 'cyclades_service_token22w'

The ``ASTAKOS_AUTH_URL`` denotes the Astakos endpoint for Cyclades,
which is used for all user management, including authentication.
Since our Astakos, Cyclades, and Pithos installations belong together,
they should all have an identical ``ASTAKOS_AUTH_URL`` setting
(see also :ref:`previously <conf-pithos>`).

The ``CYCLADES_BASE_URL`` setting must point to the top-level Cyclades URL.
Appending an extra path (``/cyclades`` here) is recommended in order to
distinguish components, if more than one are installed on the same machine.

The ``CYCLADES_SERVICE_TOKEN`` is the token used for authentication with Astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Cyclades service
registration <services-reg>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:

.. code-block:: console

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
used by the Cyclades Web UI to get from Astakos all the information needed to
fill its own cloudbar. So, we put our Astakos deployment URLs there. All the
above should have the same values we put in the corresponding variables in
``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf`` in the previous
:ref:`Pithos configuration <conf-pithos>` section.

Edit ``/etc/synnefo/20-snf-cyclades-app-plankton.conf``:

.. code-block:: console

   BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   BACKEND_BLOCK_PATH = '/srv/pithos/data/'

In this file we configure the Image Service. ``BACKEND_DB_CONNECTION``
denotes the Pithos database (where the Image files are stored), so we set it
to point to our Pithos database. ``BACKEND_BLOCK_PATH`` denotes the actual
Pithos data location.

Edit ``/etc/synnefo/20-snf-cyclades-app-queues.conf``:

.. code-block:: console

   AMQP_HOSTS=["amqp://synnefo:example_rabbitmq_passw0rd@node1.example.com:5672"]

The above settings denote the Message Queue. They should have the same values
as in the ``/etc/synnefo/20-snf-cyclades-gtools-backend.conf`` file, and
reflect our :ref:`Message Queue setup <rabbitmq-setup>`.

Edit ``/etc/synnefo/20-snf-cyclades-app-vmapi.conf``:

.. code-block:: console

   VMAPI_CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=3600"

Add a vncauthproxy user:

.. code-block:: console

    # vncauthproxy-passwd /var/lib/vncauthproxy/users synnefo
    # /etc/init.d/vncauthproxy restart

Configure the vncauthproxy settings in
``/etc/synnefo/20-snf-cyclades-app-api.conf``:

.. code-block:: console

    CYCLADES_VNCAUTHPROXY_OPTS = {
        'auth_user': 'synnefo',
        'auth_password': 'secret_password',
        'server_address': '127.0.0.1',
        'server_port': 24999,
        'enable_ssl': False,
        'ca_cert': None,
        'strict': False,
    }

Depending on your snf-vncauthproxy setup, you might want to tweak the above
settings. Check the `documentation
<http://www.synnefo.org/docs/snf-vncauthproxy/latest/index.html>`_ of
snf-vncauthproxy for more information.

We have now finished with the basic Cyclades configuration.

Database Initialization
-----------------------

Once Cyclades is configured, we sync the database:

.. code-block:: console

   $ snf-manage syncdb
   $ snf-manage migrate

and load the initial server flavors:

.. code-block:: console

   $ snf-manage loaddata flavors

If everything returns successfully, our database is ready.

Add the Ganeti backend
----------------------

In our installation we assume that we only have one Ganeti cluster, the one we
set up earlier. At this point you have to add this backend (Ganeti cluster) to
Cyclades, assuming that you have set up the :ref:`RAPI user <rapi-user>`
correctly.

.. code-block:: console

   $ snf-manage backend-add --clustername=ganeti.node1.example.com --user=cyclades --pass=example_rapi_passw0rd

You can verify that everything has been set up correctly by running:

.. code-block:: console

   $ snf-manage backend-list

Enable the new backend by running:

.. code-block:: console

   $ snf-manage backend-modify --drained False 1

.. warning:: Since version 0.13, the backend is set to "drained" by default.
    This means that you cannot add VMs to it. The reason for this is that the
    nodes should be unavailable to Synnefo until the Administrator explicitly
    releases them. To change this setting, use ``snf-manage backend-modify
    --drained False <backend-id>``.

If something is not set correctly, you can modify the backend with the
``snf-manage backend-modify`` command. If something has gone wrong, you could
modify the backend to reflect the Ganeti installation by running:

.. code-block:: console

   $ snf-manage backend-modify --clustername "ganeti.node1.example.com" \
                               --user=cyclades \
                               --pass=example_rapi_passw0rd \
                               1

``clustername`` denotes the Ganeti cluster's name. We provide the corresponding
domain that resolves to the master IP, rather than the IP itself, to ensure
Cyclades can talk to Ganeti even after a Ganeti master-failover.

``user`` and ``pass`` denote the RAPI user's username and the RAPI user's
password. Once we have set up the first backend to point at our Ganeti cluster,
we update the Cyclades backends status by running:

.. code-block:: console

   $ snf-manage backend-update-status

Cyclades can manage multiple Ganeti backends, but for the purpose of this
guide, we won't go into more detail regarding multiple backends. If you want to
learn more, please see /*TODO*/.

Add a Public Network
----------------------

Cyclades supports different Public Networks on different Ganeti backends.
After connecting Cyclades with our Ganeti cluster, we need to set up a Public
Network for this Ganeti backend (`id = 1`). The basic setup is to bridge every
created NIC on a bridge.

.. code-block:: console

   $ snf-manage network-create --subnet=10.0.0.0/24 \
                               --gateway=10.0.0.1 \
                               --public --dhcp=True --flavor=CUSTOM \
                               --link=br1 --mode=bridged \
                               --name=public_network \
                               --backend-id=1

This will create the Public Network on both Cyclades and the Ganeti backend. To
make sure everything was set up correctly, also run:

.. code-block:: console

   # snf-manage reconcile-networks

You can use ``snf-manage reconcile-networks --fix-all`` to fix any
inconsistencies that may have arisen.

You can see all available networks by running:

.. code-block:: console

   # snf-manage network-list

and inspect each network's state by running:

.. code-block:: console

   # snf-manage network-inspect <net_id>

Finally, you can see the networks from the Ganeti perspective by running on the
Ganeti MASTER:

.. code-block:: console

   # gnt-network list
   # gnt-network info <network_name>

Create pools for Private Networks
---------------------------------

To prevent duplicate assignment of resources to different private networks,
Cyclades supports two types of pools:

 - MAC prefix Pool
 - Bridge Pool

As long as those resources have been provisioned, the administrator has to
define these pools in Synnefo:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

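If you have also provisioned a range of dedicated bridges for private networks,
a bridge pool can be defined in the same way. The following is a hypothetical
example only; the base name and size must match the bridges you have actually
created, and it can be skipped for the setup described in this guide:

.. code-block:: console

   # snf-manage pool-create --type=bridge --base=prv --size=20
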
Also, change the Synnefo setting in :file:`/etc/synnefo/20-snf-cyclades-app-api.conf`:

.. code-block:: console

   DEFAULT_MAC_FILTERED_BRIDGE = 'br2'

Servers restart
---------------

Restart gunicorn on node1:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Now let's do the final connections of Cyclades with Ganeti.

``snf-dispatcher`` initialization
---------------------------------

``snf-dispatcher`` dispatches all messages published to the Message Queue and
manages the Cyclades database accordingly. It also initializes all exchanges. By
default it is not enabled during installation of Cyclades, so let's enable it in
its configuration file ``/etc/default/snf-dispatcher``:

.. code-block:: console

   SNF_DSPTCH_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-dispatcher start

You can see that everything works correctly by tailing its log file
``/var/log/synnefo/dispatcher.log``.

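For example:

.. code-block:: console

   # tail -f /var/log/synnefo/dispatcher.log
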
``snf-ganeti-eventd`` on GANETI MASTER
--------------------------------------

The last step of the Cyclades setup is enabling the ``snf-ganeti-eventd``
daemon (part of the :ref:`Cyclades Ganeti tools <cyclades-gtools>` package).
The daemon is already installed on the GANETI MASTER (node1 in our case).
``snf-ganeti-eventd`` is disabled by default during the ``snf-cyclades-gtools``
installation, so we enable it in its configuration file
``/etc/default/snf-ganeti-eventd``:

.. code-block:: console

   SNF_EVENTD_ENABLE=true

and start the daemon:

.. code-block:: console

   # /etc/init.d/snf-ganeti-eventd start

.. warning:: Make sure you start ``snf-ganeti-eventd`` *ONLY* on GANETI MASTER

Apply Quota
-----------

The following commands will check and fix the integrity of user quota.
In a freshly installed system, these commands have no effect and can be
skipped.

.. code-block:: console

   node1 # snf-manage quota --sync
   node1 # snf-manage reconcile-resources-astakos --fix
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

VM stats configuration
----------------------

Please refer to the documentation in the :ref:`admin guide <admin-guide-stats>`
for deploying and configuring snf-stats-app and collectd.


If all the above return successfully, then you have finished with the Cyclades
installation and setup.

Let's test our installation now.


Testing of Cyclades
===================

Cyclades Web UI
---------------

First of all, we need to test that our Cyclades Web UI works correctly. Open your
browser and go to the Astakos home page. Log in and then click 'Cyclades' on the
top cloud bar. This should redirect you to:

 `https://node1.example.com/cyclades/ui/`

and the Cyclades home page should appear. If not, please go back and find what
went wrong. Do not proceed if you don't see the Cyclades home page.

If the Cyclades home page appears, click on the orange button 'New machine'. The
first step of the 'New machine wizard' will appear. This step shows all the
available Images from which you can spawn new VMs. The list should be currently
empty, as we haven't registered any Images yet. Close the wizard and browse the
interface (not many things to see yet). If everything seems to work, let's
register our first Image file.

Cyclades Images
---------------

To test our Cyclades installation, we will use an Image stored on Pithos to
spawn a new VM from the Cyclades interface. We will describe all steps, even
though you may already have uploaded an Image on Pithos from a :ref:`previous
<snf-image-images>` section:

 * Upload an Image file to Pithos
 * Register that Image file to Cyclades
 * Spawn a new VM from that Image from the Cyclades Web UI

We will use the `kamaki <http://www.synnefo.org/docs/kamaki/latest/index.html>`_
command line client to do the uploading and registering of the Image.

Installation of `kamaki`
~~~~~~~~~~~~~~~~~~~~~~~~

You can install `kamaki` anywhere you like, since it is a standalone client of
the APIs and talks to the installation over `http`. For the purpose of this
guide, we will assume that we have downloaded the `Debian Squeeze Base Image
<https://pithos.okeanos.grnet.gr/public/9epgb>`_ and stored it under node1's
``/srv/images`` directory. For that reason we will install `kamaki` on node1,
too. We do this by running:

.. code-block:: console

   # apt-get install kamaki

Configuration of kamaki
~~~~~~~~~~~~~~~~~~~~~~~

Now we need to set up kamaki, by adding the appropriate URLs and tokens of our
installation. We do this by running:

.. code-block:: console

   $ kamaki config set cloud.default.url \
       "https://node1.example.com/astakos/identity/v2.0"
   $ kamaki config set cloud.default.token USER_TOKEN

Both the Authentication URL and the USER_TOKEN appear on the user's
`API access` web page on the Astakos Web UI.

You can see that the new configuration options have been applied correctly,
either by checking the editable file ``~/.kamakirc`` or by running:

.. code-block:: console

   $ kamaki config list

A quick test to check that kamaki is configured correctly is to try to
authenticate a user based on his/her token (in this case the user is you):

.. code-block:: console

   $ kamaki user authenticate

The above operation provides various user information, e.g. the UUID (the
unique user id), which might prove useful in some operations.

Upload an Image file to Pithos
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that we have set up `kamaki`, we will upload the Image that we have
downloaded and stored under ``/srv/images/``. Although we can upload the Image
under the root ``Pithos`` container (as you may have done when uploading the
Image from the Pithos Web UI), we will create a new container called ``images``
and store the Image under that container. We do this for two reasons:

a) To demonstrate how to create containers other than the default ``Pithos``.
   This can be done only with the `kamaki` client and not through the Web UI.

b) As a best organization practice, so that you won't have your Image files
   tangled along with all your other Pithos files and directory structures.

We create the new ``images`` container by running:

.. code-block:: console

   $ kamaki container create images

To check that the container has been created, list its contents (it should be
empty for now):

.. code-block:: console

   $ kamaki file list /images

Then, we upload the Image file to that container:

.. code-block:: console

   $ kamaki file upload /srv/images/debian_base-6.0-7-x86_64.diskdump /images

The first argument is the local path and the second is the remote container on
Pithos. Check that the file has been uploaded, by listing the container
contents:

.. code-block:: console

   $ kamaki file list /images

Alternatively, check that the new container and file appear on the Pithos Web
UI.

Register an existing Image file to Cyclades
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the purposes of the following example, we assume that the user has uploaded
a file in container ``pithos`` called ``debian_base-6.0-x86_64``. Moreover,
they should have the appropriate `metadata file <http://cdn.synnefo.org/debian_base-6.0-x86_64.diskdump.meta>`_.

Once the Image file has been successfully uploaded on Pithos, we register it to
Cyclades by running:

.. code-block:: console

   $ kamaki image register --name "Debian Base" \
                           --location /images/debian_base-6.0-11-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
                           --property description="Debian Squeeze Base System" \
                           --property size=451 --property kernel=2.6.32 --property GUI="No GUI" \
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers a Pithos file as an Image in Cyclades. This Image will
be public (``--public``), so all users will be able to spawn VMs from it.

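To verify that the registration worked, you can list the registered Images with
kamaki; the new "Debian Base" Image should appear in the output:

.. code-block:: console

   $ kamaki image list
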
Spawn a VM from the Cyclades Web UI
-----------------------------------

If the registration completes successfully, then go to the Cyclades Web UI from
your browser at:

 `https://node1.example.com/cyclades/ui/`

Click on the 'New Machine' button and the first step of the wizard will appear.
Click on 'My Images' (right after 'System' Images) on the left pane of the
wizard. Your previously registered Image "Debian Base" should appear under
'Available Images'. If not, something has gone wrong with the registration. Make
sure you can see your Image file on the Pithos Web UI and that ``kamaki image
register`` returns successfully with all options and properties as shown above.

If the Image appears on the list, select it and complete the wizard by selecting
a flavor and a name for your VM. Then finish by clicking 'Create'. Make sure you
write down your password, because you *WON'T* be able to retrieve it later.

If everything was set up correctly, after a few minutes your new machine will go
to state 'Running' and you will be able to use it. Click 'Console' to connect
through VNC out of band, or click on the machine's icon to connect directly via
SSH or RDP (for Windows machines).

Congratulations. You have successfully installed the whole Synnefo stack and
connected all components.