.. _admin-guide:

Synnefo Administrator's Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the complete Synnefo Administrator's Guide.


General Synnefo Architecture
============================

The following graph shows the whole Synnefo architecture and how it interacts
with multiple Ganeti clusters. We hope that after reading the Administrator's
Guide you will be able to understand every component and all the interactions
between them. It is a good idea to first go through the Quick Administrator's
Guide before proceeding.

.. image:: images/synnefo-arch2.png
   :width: 100%
   :target: _images/synnefo-arch2.png


Identity Service (Astakos)
==========================

Overview
--------

Authentication methods
~~~~~~~~~~~~~~~~~~~~~~

Local Authentication
````````````````````

LDAP Authentication
```````````````````

.. _shibboleth-auth:

Shibboleth Authentication
`````````````````````````

Astakos can delegate user authentication to a Shibboleth federation.

To set up Shibboleth, install the required package::

  apt-get install libapache2-mod-shib2

Adjust the configuration files in ``/etc/shibboleth`` as needed.

Add the following in ``/etc/apache2/sites-available/synnefo-ssl``::

  ShibConfig /etc/shibboleth/shibboleth2.xml
  Alias      /shibboleth-sp /usr/share/shibboleth

  <Location /im/login/shibboleth>
    AuthType shibboleth
    ShibRequireSession On
    ShibUseHeaders On
    require valid-user
  </Location>

and before the line containing::

  ProxyPass        / http://localhost:8080/ retry=0

add::

  ProxyPass /Shibboleth.sso !

Then, enable the Shibboleth module::

  a2enmod shib2

After passing through the Apache module, the following tokens should be
available at the destination::

  eppn # eduPersonPrincipalName
  Shib-InetOrgPerson-givenName
  Shib-Person-surname
  Shib-Person-commonName
  Shib-InetOrgPerson-displayName
  Shib-EP-Affiliation
  Shib-Session-ID

Finally, add ``'shibboleth'`` to the ``ASTAKOS_IM_MODULES`` list. The variable
resides in the file ``/etc/synnefo/20-snf-astakos-app-settings.conf``.
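
Since the settings file uses Python syntax, the change can be sketched as
follows (a hypothetical excerpt; the presence of a ``'local'`` entry is an
assumption about your existing configuration):

```python
# Hypothetical excerpt from /etc/synnefo/20-snf-astakos-app-settings.conf.
# The file uses Python syntax; 'local' is assumed to be enabled already,
# and 'shibboleth' is appended to activate the Shibboleth login method.
ASTAKOS_IM_MODULES = ['local', 'shibboleth']
```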

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Astakos
--------------------

User activation methods
~~~~~~~~~~~~~~~~~~~~~~~

When a new user signs up, he/she is not marked as active. You can see his/her
state by running (on the machine that runs the Astakos app):

.. code-block:: console

   $ snf-manage user-list

There are two different ways to activate a new user. Both need access to a
running :ref:`mail server <mail-server>`.

Manual activation
`````````````````

You can manually activate a new user that has already signed up, by sending
him/her an activation email. The email will contain an appropriate activation
link, which will complete the activation process if followed. You can send the
email by running:

.. code-block:: console

   $ snf-manage user-activation-send <user ID or email>

Be sure to have already set up your mail server and defined it in your Synnefo
settings before running the command.

Automatic activation
````````````````````

FIXME: Describe Regex activation method

Astakos advanced operations
---------------------------

Adding "Terms of Use"
~~~~~~~~~~~~~~~~~~~~~

Astakos supports versioned terms-of-use. First of all you need to create an
HTML file that will contain your terms. For example, create the file
``/usr/share/synnefo/sample-terms.html``, which contains the following:

.. code-block:: console

   <h1>~okeanos terms</h1>

   These are the example terms for ~okeanos

Then, add those terms-of-use with the snf-manage command:

.. code-block:: console

   $ snf-manage term-add /usr/share/synnefo/sample-terms.html

Your terms have been successfully added and you will see the corresponding link
appearing in the Astakos web pages' footer.

Enabling reCAPTCHA
~~~~~~~~~~~~~~~~~~

Astakos supports the `reCAPTCHA <http://www.google.com/recaptcha>`_ feature.
If enabled, it protects the Astakos forms from bots. To enable the feature, go
to https://www.google.com/recaptcha/admin/create and create your own reCAPTCHA
key pair. Then edit ``/etc/synnefo/20-snf-astakos-app-settings.conf`` and set
the corresponding variables to reflect your newly created key pair. Finally, set
the ``ASTAKOS_RECAPTCHA_ENABLED`` variable to ``True``:

.. code-block:: console

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_ENABLED = True

Restart the service on the Astakos node(s) and you are ready:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Check out your new Sign up page. If you see the reCAPTCHA box, you have set
everything up correctly.


File Storage Service (Pithos)
=============================

Overview
--------

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Pithos
-------------------

Pithos advanced operations
--------------------------


Compute/Network/Image Service (Cyclades)
========================================

Compute Overview
----------------

Network Overview
----------------

Image Overview
--------------

Architecture
------------

Asynchronous communication with Ganeti backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Synnefo uses Google's Ganeti for VM cluster management. In order for Cyclades
to be able to handle thousands of user requests, Cyclades and Ganeti
communicate asynchronously. Briefly, requests are submitted to Ganeti through
Ganeti's RAPI/HTTP interface, and then asynchronous notifications about the
progress of Ganeti jobs are created and pushed upwards to Cyclades. The
architecture and communication with a Ganeti backend are shown in the graph
below:

.. image:: images/cyclades-ganeti-communication.png
   :width: 50%
   :target: _images/cyclades-ganeti-communication.png

The Cyclades API server is responsible for handling user requests. Read-only
requests are served directly by looking up the Cyclades DB. If the request
needs an action in the Ganeti backend, Cyclades submits jobs to the Ganeti
master using the `Ganeti RAPI interface
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html>`_.

While Ganeti executes the job, `snf-ganeti-eventd`, `snf-ganeti-hook` and
`snf-progress-monitor` monitor the progress of the job and send corresponding
messages to the RabbitMQ servers. These components are part of
`snf-cyclades-gtools` and must be installed on all Ganeti nodes. Specifically:

* *snf-ganeti-eventd* sends messages about operations affecting the operating
  state of instances and networks. It works by monitoring the Ganeti job queue.
* *snf-ganeti-hook* sends messages about the NICs of instances. It includes a
  number of `Ganeti hooks <http://docs.ganeti.org/ganeti/2.2/html/hooks.html>`_
  for customisation of operations.
* *snf-progress-monitor* sends messages about the progress of the Image
  deployment phase, which is done by the Ganeti OS Definition `snf-image`.

Finally, `snf-dispatcher` consumes messages from the RabbitMQ queues, processes
these messages and properly updates the state of the Cyclades DB. Subsequent
requests to the Cyclades API will retrieve the updated state from the DB.
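
The dispatcher's role can be illustrated with a minimal sketch. This is purely
illustrative, not Synnefo code; the message fields and state names used here
are assumptions:

```python
# Toy model of snf-dispatcher: consume a notification about a finished
# Ganeti job and update the corresponding DB record. The message fields
# and state names are hypothetical, not the real Synnefo message format.
db = {"vm-42": {"operstate": "BUILD"}}

def dispatch(message):
    # A successful Ganeti job moves the server to its new operational state.
    if message["status"] == "success":
        db[message["server"]]["operstate"] = message["new_state"]

# A (hypothetical) message announcing that the instance finished building.
dispatch({"server": "vm-42", "status": "success", "new_state": "STARTED"})
```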


Prereqs
-------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Installation
------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Configuration
-------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Working with Cyclades
---------------------

Managing Ganeti Backends
~~~~~~~~~~~~~~~~~~~~~~~~

Since v0.11, Synnefo is able to manage multiple Ganeti clusters (backends),
enabling it to scale linearly to tens of thousands of VMs. Backends can be
dynamically added or removed via `snf-manage` commands.

Each newly created VM is allocated to a Ganeti backend by the Cyclades backend
allocator. The VM is "pinned" to this backend and cannot change it throughout
its lifetime. The backend allocator decides in which backend to spawn the VM
based on the available resources of each backend, trying to balance the load
between them.

Handling of networks, as far as backends are concerned, is based on whether the
network is public or not. Public networks are created through the `snf-manage
network-create` command, and are only created on one backend. Private networks
are created on all backends, in order to ensure that VMs residing on different
backends can be connected to the same private network.

Listing existing backends
`````````````````````````

To list all the Ganeti backends known to Synnefo, we run:

.. code-block:: console

   $ snf-manage backend-list

Adding a new Ganeti backend
```````````````````````````

Backends are dynamically added under the control of Synnefo with the
`snf-manage backend-add` command. In this section it is assumed that a Ganeti
cluster named ``cluster.example.com`` is already up and running and configured
to be able to host Synnefo VMs.

To add this Ganeti cluster, we run:

.. code-block:: console

   $ snf-manage backend-add --clustername=cluster.example.com --user="synnefo_user" --pass="synnefo_pass"

where ``clustername`` is the cluster hostname of the Ganeti cluster, and
``user`` and ``pass`` are the credentials for the `Ganeti RAPI user
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html#users-and-passwords>`_. All
backend attributes can also be changed dynamically using the `snf-manage
backend-modify` command.

``snf-manage backend-add`` will also create all existing private networks on
the new backend. You can verify that the backend has been added by running
`snf-manage backend-list`.

Note that no VMs will be spawned on this backend, since by default it is in a
``drained`` state after addition and has no public network assigned to it.

So, first you need to create its public network, make sure everything works as
expected, and finally make it active by un-setting the ``drained`` flag. You
can do this by running:

.. code-block:: console

   $ snf-manage backend-modify --drained=False <backend_id>

Removing an existing Ganeti backend
```````````````````````````````````

In order to remove an existing backend from Synnefo, we run:

.. code-block:: console

   # snf-manage backend-remove <backend_id>

This command will fail if there are active VMs on the backend. Also, the
backend is not cleaned before removal, so all the Synnefo private networks
will be left on the Ganeti nodes. You need to remove them manually.

Allocation of VMs in Ganeti backends
````````````````````````````````````

As already mentioned, the Cyclades backend allocator is responsible for
allocating new VMs to backends. This allocator does not choose the exact Ganeti
node that will host the VM but just the Ganeti backend. The exact node is
chosen by the Ganeti cluster's allocator (hail).

The decision about which backend will host a VM is based on the available
resources. The allocator computes a score for each backend that reflects its
load factor, and the backend with the minimum score is chosen. The admin can
exclude backends from the allocation phase by marking them as ``drained``, by
running:

.. code-block:: console

   $ snf-manage backend-modify --drained=True <backend_id>

The backend resources are periodically updated, at a period defined by the
``BACKEND_REFRESH_MIN`` setting, or by running the `snf-manage
backend-update-status` command. It is advised to have a cron job running this
command at a smaller interval than ``BACKEND_REFRESH_MIN``, in order to remove
the load of refreshing the backend stats from the VM creation phase.
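
The minimum-score selection described above can be sketched as follows. This
is a simplified illustration, not the actual Cyclades allocator; the equal
weighting of memory and disk load is an assumption:

```python
# Simplified sketch of a backend allocator: compute a load score per
# backend from its free resources and pick the backend with the minimum
# score. The exact metric used by Cyclades may differ from this one.
def backend_score(backend):
    # Fraction of memory and disk already in use, averaged.
    mem_load = 1.0 - backend["mfree"] / backend["mtotal"]
    disk_load = 1.0 - backend["dfree"] / backend["dtotal"]
    return (mem_load + disk_load) / 2.0

def allocate(backends):
    # Drained backends are excluded from the allocation phase.
    candidates = [b for b in backends if not b["drained"]]
    return min(candidates, key=backend_score)

# Hypothetical resource snapshots for three backends.
backends = [
    {"id": 1, "mfree": 2048, "mtotal": 8192, "dfree": 100, "dtotal": 500, "drained": False},
    {"id": 2, "mfree": 6144, "mtotal": 8192, "dfree": 400, "dtotal": 500, "drained": False},
    {"id": 3, "mfree": 8192, "mtotal": 8192, "dfree": 500, "dtotal": 500, "drained": True},
]
```

Here backend 3 is idle but drained, so the least-loaded active backend
(backend 2) wins.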

Finally, the admin can decide to have a user's VMs allocated to a specific
backend, with the ``BACKEND_PER_USER`` setting. This is a mapping between
users and backends. If the user is found in ``BACKEND_PER_USER``, then Synnefo
allocates all his/her VMs to the backend specified in the variable, even if it
is marked as drained (useful for testing).
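
A sketch of what such a mapping might look like in the (Python-syntax)
settings file; the key/value format shown is an assumption:

```python
# Hypothetical BACKEND_PER_USER entry: pin all VMs of this user to the
# backend with ID 2, bypassing the allocator. The exact key and value
# format is an assumption for illustration.
BACKEND_PER_USER = {
    "testuser@example.com": 2,
}
```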


Managing Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, Cyclades uses Ganeti for management of VMs. The administrator can
handle Cyclades VMs just like any other Ganeti instance, via `gnt-instance`
commands. All Ganeti instances that belong to Synnefo are separated from
others by a prefix in their names. This prefix is defined by the
``BACKEND_PREFIX_ID`` setting in
``/etc/synnefo/20-snf-cyclades-app-backend.conf``.

Apart from handling instances directly at the Ganeti level, a number of
`snf-manage` commands are available:

* ``snf-manage server-list``: List servers
* ``snf-manage server-show``: Show information about a server in the Cyclades DB
* ``snf-manage server-inspect``: Inspect the state of a server both in DB and Ganeti
* ``snf-manage server-modify``: Modify the state of a server in the Cyclades DB
* ``snf-manage server-create``: Create a new server
* ``snf-manage server-import``: Import an existing Ganeti instance to Cyclades


Managing Virtual Networks
~~~~~~~~~~~~~~~~~~~~~~~~~

Cyclades is able to create and manage Virtual Networks. Networking is
deployment specific and must be customized based on the specific needs of the
system administrator. For a better understanding of networking please refer to
the :ref:`Network <networks>` section.

Exactly as Cyclades VMs can be handled like Ganeti instances, Cyclades Networks
can also be handled as Ganeti networks, via `gnt-network` commands. All Ganeti
networks that belong to Synnefo are named with the prefix
`${BACKEND_PREFIX_ID}-net-`.

There are also the following `snf-manage` commands for managing networks:

* ``snf-manage network-list``: List networks
* ``snf-manage network-show``: Show information about a network in the Cyclades DB
* ``snf-manage network-inspect``: Inspect the state of the network in DB and Ganeti backends
* ``snf-manage network-modify``: Modify the state of a network in the Cyclades DB
* ``snf-manage network-create``: Create a new network
* ``snf-manage network-remove``: Remove an existing network

Managing Network Resources
``````````````````````````

Proper operation of the Cyclades Network Service depends on the unique
assignment of specific resources to each type of virtual network. Specifically,
these resources are:

* IP addresses. Cyclades creates a pool of IPs for each Network, and assigns a
  unique IP address to each VM, thus connecting it to this Network. You can see
  the IP pool of each network by running `snf-manage network-inspect
  <network_ID>`. IP pools are automatically created and managed by Cyclades,
  depending on the subnet of the Network.
* Bridges corresponding to physical VLANs, which are required for networks of
  type `PRIVATE_PHYSICAL_VLAN`.
* One bridge corresponding to one physical VLAN, which is required for networks
  of type `PRIVATE_MAC_PREFIX`.
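
The IP pool bookkeeping can be illustrated with a small sketch. This is not
Cyclades code; it is a toy model built on Python's ``ipaddress`` module, and
the gateway-skipping behaviour is an assumption:

```python
import ipaddress

# Toy illustration of an IP pool for a network: hand out the lowest
# free host address of the subnet, skipping the gateway (the network
# and broadcast addresses are already excluded by .hosts()).
class IPPool:
    def __init__(self, subnet, gateway):
        self.available = [
            str(host) for host in ipaddress.ip_network(subnet).hosts()
            if str(host) != gateway
        ]

    def allocate(self):
        # Pop the lowest free address; a real pool would also track
        # reservations and support releasing addresses back.
        return self.available.pop(0)

pool = IPPool("192.168.0.0/28", gateway="192.168.0.1")
first = pool.allocate()
second = pool.allocate()
```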

Cyclades allocates those resources from pools that are created by the
administrator with the `snf-manage pool-create` management command.
    
456
Pool Creation
457
`````````````
458
Pools are created using the `snf-manage pool-create` command:
459

    
460
.. code-block:: console
461

    
462
   # snf-manage pool-create --type=bridge --base=prv --size=20
463

    
464
will create a pool of bridges, containing bridges prv1, prv2,..prv21.
465

    
466
You can verify the creation of the pool, and check its contents by running:
467

    
468
.. code-block:: console
469

    
470
   # snf-manage pool-list
471
   # snf-manage pool-show --type=bridge 1
472

    
473
With the same commands you can handle a pool of MAC prefixes. For example:
474

    
475
.. code-block:: console
476

    
477
   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536
478

    
479
will create a pool of MAC prefixes from ``aa:00:1`` to ``b9:ff:f``. The MAC
480
prefix pool is responsible for providing only unicast and locally administered
481
MAC addresses, so many of these prefixes will be externally reserved, to
482
exclude from allocation.
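
The unicast / locally-administered constraint mentioned above boils down to
two bits of the first octet. A small sketch of the check (an illustration,
not the actual pool implementation):

```python
# Sketch of the unicast / locally-administered check on a MAC prefix.
# The first octet must have the multicast bit (0x01) clear and the
# locally administered bit (0x02) set; prefixes failing either test
# would be reserved and excluded from allocation.
def usable_mac_prefix(prefix):
    first_octet = int(prefix.split(":")[0], 16)
    unicast = (first_octet & 0x01) == 0
    locally_administered = (first_octet & 0x02) != 0
    return unicast and locally_administered
```

For example, ``aa`` (0b10101010) is both unicast and locally administered,
while ``ab`` has the multicast bit set and would be excluded.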
483

    
484
Cyclades advanced operations
485
----------------------------
486

    
487
Reconciliation mechanism
488
~~~~~~~~~~~~~~~~~~~~~~~~
489

    
490
On certain occasions, such as a Ganeti or RabbitMQ failure, the state of
491
Cyclades database may differ from the real state of VMs and networks in the
492
Ganeti backends. The reconciliation process is designed to synchronize
493
the state of the Cyclades DB with Ganeti. There are two management commands
494
for reconciling VMs and Networks
495

    
496
Reconciling Virtual Machines
497
````````````````````````````
498

    
499
Reconciliation of VMs detects the following conditions:
500

    
501
 * Stale DB servers without corresponding Ganeti instances
502
 * Orphan Ganeti instances, without corresponding DB entries
503
 * Out-of-sync state for DB entries wrt to Ganeti instances
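
The first two checks are essentially set differences between the DB and
Ganeti. A toy illustration (not Cyclades code; the instance names are
hypothetical):

```python
# Toy illustration of the reconciliation checks: compare the set of
# server IDs known to the DB with the set of instances reported by
# Ganeti. Stale entries exist only in the DB; orphans only in Ganeti.
def reconcile(db_servers, ganeti_instances):
    stale = set(db_servers) - set(ganeti_instances)
    orphan = set(ganeti_instances) - set(db_servers)
    return stale, orphan

stale, orphan = reconcile({"snf-1", "snf-2"}, {"snf-2", "snf-3"})
```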

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-servers

Adding the `--fix-all` option will do the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-servers --fix-all

Please see ``snf-manage reconcile-servers --help`` for all the details.


Reconciling Networks
````````````````````

Reconciliation of Networks detects the following conditions:

  * Stale DB networks without corresponding Ganeti networks
  * Orphan Ganeti networks, without corresponding DB entries
  * Private networks that are not created in all Ganeti backends
  * Unsynchronized IP pools

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-networks

Adding the `--fix-all` option will do the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-networks --fix-all

Please see ``snf-manage reconcile-networks --help`` for all the details.


Block Storage Service (Archipelago)
===================================

Overview
--------

Archipelago offers copy-on-write snapshottable volumes. Pithos images can be
used to provision a volume with copy-on-write semantics (i.e. a clone).
Snapshots offer a unique deduplicated image of a volume, which reflects the
volume state at the time of snapshot creation and is indistinguishable from a
Pithos image.

Archipelago is used by Cyclades and Ganeti for fast provisioning of VMs based
on CoW volumes. Moreover, it enables live migration of thinly-provisioned VMs
with no physically shared storage.

Archipelago Architecture
------------------------

.. image:: images/archipelago-architecture.png
   :width: 50%
   :target: _images/archipelago-architecture.png

.. _syn+archip+rados:

Overview of Synnefo + Archipelago + RADOS
-----------------------------------------

.. image:: images/synnefo-arch3.png
   :width: 100%
   :target: _images/synnefo-arch3.png

Prereqs
-------

The administrator must initialize the storage backend where Archipelago volume
blocks will reside.

In case of a files backend, the administrator must create two directories: one
for the Archipelago data blocks and one for the Archipelago map blocks. These
should probably be over shared storage, to enable sharing Archipelago volumes
between multiple nodes. The administrator must also supply a directory where
the Pithos data and map blocks reside.

In case of a RADOS backend, the administrator must create two RADOS pools, one
for the data blocks and one for the map blocks. These pools must be the same
pools used by Pithos, in order to enable volume creation based on Pithos
images.

Installation
------------

Archipelago consists of the following packages:

* ``libxseg0``: libxseg, used to communicate over shared memory segments
* ``python-xseg``: Python bindings for libxseg
* ``archipelago-kernel-dkms``: contains the Archipelago kernel modules that
  provide block devices to be used as VM disks
* ``python-archipelago``: the Archipelago Python module. Includes archipelago
  and vlmc functionality.
* ``archipelago``: user space tools and peers for Archipelago management and
  volume composition
* ``archipelago-ganeti``: Ganeti ExtStorage scripts that enable Ganeti to
  provision VMs over Archipelago

Running

.. code-block:: console

  $ apt-get install archipelago-ganeti

should fetch all the required packages and get Archipelago up and running.

Bear in mind that a custom librados is required, which is provided in the
GRNET apt repository.

For now, librados is a dependency of archipelago, even if you do not intend to
use archipelago over RADOS.

Configuration
-------------

Archipelago should work out of the box with a RADOS backend, but basic
configuration can be done in ``/etc/default/archipelago``.

If you wish to change the storage backend to files, set

.. code-block:: console

   STORAGE="files"

and provide the appropriate settings for the files storage backend in the conf
file. These are:

* ``FILED_IMAGES``: directory for Archipelago data blocks.
* ``FILED_MAPS``: directory for Archipelago map blocks.
* ``PITHOS``: directory of Pithos data blocks.
* ``PITHOSMAPS``: directory of Pithos map blocks.

The settings for the RADOS storage backend are:

* ``RADOS_POOL_MAPS``: The pool where Archipelago and Pithos map blocks reside.
* ``RADOS_POOL_BLOCKS``: The pool where Archipelago and Pithos data blocks
  reside.

Examples can be found in the conf file.

Be aware that the Archipelago infrastructure does not provide default values
for these settings. If they are not set in the conf file, Archipelago will not
be able to function.

Archipelago also provides ``VERBOSITY`` config options to control the output
generated by the userspace peers.

The available options are:

* ``VERBOSITY_BLOCKERB``
* ``VERBOSITY_BLOCKERM``
* ``VERBOSITY_MAPPER``
* ``VERBOSITY_VLMC``

and the available values are:

* 0 : Error only logging.
* 1 : Warning logging.
* 2 : Info logging.
* 3 : Debug logging. WARNING: This option produces tons of output, but the
  logrotate daemon should take care of it.

Working with Archipelago
------------------------

The ``archipelago`` tool provides basic control over Archipelago.

Usage:

.. code-block:: console

  $ archipelago [-u] command

Currently it supports the following commands:

* ``start [peer]``
  Starts archipelago or the specified peer.
* ``stop [peer]``
  Stops archipelago or the specified peer.
* ``restart [peer]``
  Restarts archipelago or the specified peer.
* ``status``
  Shows the status of archipelago.

Available peers: ``blockerm``, ``blockerb``, ``mapperd``, ``vlmcd``.

``start``, ``stop`` and ``restart`` can be combined with the ``-u / --user``
option to affect only the userspace peers supporting archipelago.


Archipelago advanced operations
-------------------------------

The ``vlmc`` tool provides a way to interact with Archipelago volumes.

* ``vlmc map <volumename>``: maps the volume to an xsegbd device.

* ``vlmc unmap </dev/xsegbd[1-..]>``: unmaps the specified device from the
  system.

* ``vlmc create <volumename> --snap <snapname> --size <size>``: creates a new
  volume named <volumename> from the snapshot <snapname> with size <size>.
  The ``--snap`` and ``--size`` options are optional, but at least one of them
  is mandatory. For example:

  ``vlmc create <volumename> --snap <snapname>`` creates a volume named
  volumename from snapshot snapname. The size of the volume is the same as
  the size of the snapshot.

  ``vlmc create <volumename> --size <size>`` creates an empty volume of size
  <size> named <volumename>.

* ``vlmc remove <volumename>``: removes the volume and all the related
  Archipelago blocks from storage.

* ``vlmc list``: provides a list of Archipelago volumes. Currently it only
  works with the RADOS storage backend.

* ``vlmc info <volumename>``: shows volume information. Currently it returns
  only the volume size.

* ``vlmc open <volumename>``: opens an Archipelago volume. That is, it takes
  all the necessary locks and also makes the rest of the infrastructure aware
  of the operation.

  This operation succeeds even if the volume is already opened.

* ``vlmc close <volumename>``: closes an Archipelago volume. That is, it
  performs all the necessary functions in the infrastructure to successfully
  release the volume. It also releases all the acquired locks.

  ``vlmc close`` should be performed after a ``vlmc open`` operation.

* ``vlmc lock <volumename>``: locks a volume. This allows the administrator
  to lock an Archipelago volume, independently from the rest of the
  infrastructure.

* ``vlmc unlock [-f] <volumename>``: unlocks a volume. This allows the
  administrator to unlock a volume, independently from the rest of the
  infrastructure.
  The unlock operation can be performed only by the blocker that acquired the
  lock in the first place. To unlock a volume from another blocker, the ``-f``
  option must be used to break the lock.


The "kamaki" API client
=======================

To upload, register or modify an image you will need the **kamaki** tool.
Before proceeding make sure that it is configured properly. Verify that
*image_url*, *storage_url*, and *token* are set as needed:

.. code-block:: console

   $ kamaki config list

To change a setting use ``kamaki config set``:

.. code-block:: console

   $ kamaki config set image_url https://cyclades.example.com/plankton
   $ kamaki config set storage_url https://pithos.example.com/v1
   $ kamaki config set token ...

Upload Image
------------

As a shortcut, you can configure a default account and container that will be
used by the ``kamaki store`` commands:

.. code-block:: console

   $ kamaki config set storage_account images@example.com
   $ kamaki config set storage_container images

If the container does not exist, you will have to create it before uploading
any images:

.. code-block:: console

   $ kamaki store create images

You are now ready to upload an image. You can upload it with a Pithos+ client,
or use kamaki directly:

.. code-block:: console

   $ kamaki store upload ubuntu.iso

You can use any Pithos+ client to verify that the image was uploaded correctly.
The full Pithos URL for the previous example will be
``pithos://images@example.com/images/ubuntu.iso``.


Register Image
--------------

To register an image you will need to use the full Pithos+ URL. To register
the image from the previous example as a public image, use:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso --public

The ``--public`` flag is important: if it is missing, the registered image
will not be listed by ``kamaki glance list``.

Use ``kamaki glance register`` with no arguments to see a list of available
options. A more complete example would be the following:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso \
            --public --disk-format diskdump --property kernel=3.1.2

To verify that the image was registered successfully use:

.. code-block:: console

   $ kamaki glance list -l
836
Miscellaneous
=============

.. RabbitMQ

RabbitMQ Broker
---------------

Queue nodes run the RabbitMQ software, which provides AMQP functionality. To
guarantee high availability, more than one Queue node should be deployed, each
of them belonging to the same `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Synnefo uses RabbitMQ's
active/active `Highly Available Queues <http://www.rabbitmq.com/ha.html>`_,
which are mirrored between two nodes within a RabbitMQ cluster.

The RabbitMQ nodes that form the cluster are declared to Synnefo through the
`AMQP_HOSTS` setting. Each time a Synnefo component needs to connect to
RabbitMQ, one of these nodes is chosen at random. The client that Synnefo
uses to connect to RabbitMQ handles connection failures transparently and
tries to reconnect to a different node. As long as at least one of these nodes
is up and running, Synnefo's functionality should not be degraded by RabbitMQ
node failures.
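
As an illustrative sketch, the ``AMQP_HOSTS`` setting in a Synnefo settings
file could look like the following. The hostnames, port and credentials here
are placeholder assumptions; list every broker of your own cluster, using the
user and password you created with ``rabbitmqctl add_user``:

```python
# Sketch only -- node names and credentials are placeholders, not
# shipped defaults. Listing every RabbitMQ node of the cluster lets
# Synnefo pick one at random and fail over to the others.
AMQP_HOSTS = [
    "amqp://synnefo:example_pass@node1:5672",
    "amqp://synnefo:example_pass@node2:5672",
]
```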

All the queues that are being used are declared as durable, meaning that
messages are persistently stored in RabbitMQ until they are successfully
processed by a client.

Currently, RabbitMQ is used by the following components:

* `snf-ganeti-eventd`, `snf-ganeti-hook` and `snf-progress-monitor`:
  These components send messages concerning the status and progress of
  jobs in the Ganeti backend.
* `snf-dispatcher`: This daemon consumes the messages that are sent from
  the above components, and updates the Cyclades DB accordingly.


Installation
~~~~~~~~~~~~

Please check the RabbitMQ documentation, which covers extensively the
`installation of the RabbitMQ server <http://www.rabbitmq.com/download.html>`_
and the setup of a `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Also, check out the `web
management plugin <http://www.rabbitmq.com/management.html>`_, which can be
useful for managing and monitoring RabbitMQ.

For a basic installation of RabbitMQ on two nodes (node1 and node2) you can do
the following:

On both nodes, install rabbitmq-server and create a Synnefo user:

.. code-block:: console

  $ apt-get install rabbitmq-server
  $ rabbitmqctl add_user synnefo "example_pass"
  $ rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

Also guarantee that both nodes share the same Erlang cookie, by running:

.. code-block:: console

  $ scp node1:/var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/.erlang.cookie

and restart the nodes:

.. code-block:: console

  $ /etc/init.d/rabbitmq-server restart


To set up the RabbitMQ cluster, run:

.. code-block:: console

  root@node2: rabbitmqctl stop_app
  root@node2: rabbitmqctl reset
  root@node2: rabbitmqctl cluster rabbit@node1 rabbit@node2
  root@node2: rabbitmqctl start_app
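
Note that the ``cluster`` subcommand above is the syntax of older RabbitMQ
releases; on RabbitMQ 3.0 and later it was replaced by ``join_cluster``, and
the equivalent sequence would be:

```shell
# RabbitMQ >= 3.0: join node2 to the cluster formed by node1
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl start_app
```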

You can verify that the cluster is set up correctly by running:

.. code-block:: console

  root@node2: rabbitmqctl cluster_status


Admin tool: snf-manage
----------------------

``snf-manage`` is a tool used to perform various administrative tasks. It needs
to be able to access the Django database, so the environment in which it runs
must be able to import the Django settings.

Additionally, administrative tasks can be performed via the admin web interface
located at ``/admin``. Only users of type ADMIN can access the admin pages. To
change the type of a user to ADMIN, snf-manage can be used:

.. code-block:: console

   $ snf-manage user-modify 42 --type ADMIN

Logging
-------

Logging in Synnefo uses Python's ``logging`` module. The module is configured
using dictionary configuration, whose format is described here:

http://docs.python.org/release/2.7.1/library/logging.html#logging-config-dictschema

Note that this is a feature of Python 2.7 that we have backported for use in
Python 2.6.

The logging configuration dictionary is defined in
``/etc/synnefo/10-snf-webproject-logging.conf``.

The administrator can have finer logging control by modifying the
``LOGGING_SETUP`` dictionary, and defining subloggers with different handlers
and log levels. For example, to enable debug messages only for the API, set the
level of 'synnefo.api' to ``DEBUG``.

By default, the Django webapp and snf-manage log to syslog, while
`snf-dispatcher` logs to `/var/log/synnefo/dispatcher.log`.


.. _scale-up:

Scaling up to multiple nodes
============================

Here we describe what a large-scale Synnefo deployment should look like. Make
sure you are familiar with Synnefo and Ganeti before proceeding with this
section. This means you should at least have already set up a working Synnefo
deployment, as described in the :ref:`Admin's Quick Installation Guide
<quick-install-admin-guide>`, and also have read the Administrator's Guide up
to this section.

Graph of a scale-out Synnefo deployment
---------------------------------------

Each box in the following graph corresponds to a distinct physical node:

.. image:: images/synnefo-arch2-roles.png
   :width: 100%
   :target: _images/synnefo-arch2-roles.png

The above graph is actually the same as the one at the beginning of this
:ref:`guide <admin-guide>`, with the only difference that here we show the
Synnefo roles of each physical node. These roles are described in the
following section.

.. _physical-node-roles:

Physical Node roles
-------------------

As shown in the previous graph, a scale-out Synnefo deployment consists of
multiple physical nodes that have the following roles:

* **WEBSERVER**: A web server running in front of gunicorn (e.g.: Apache, nginx)
* **ASTAKOS**: The Astakos application (gunicorn)
* **ASTAKOS_DB**: The Astakos database (postgresql)
* **PITHOS**: The Pithos application (gunicorn)
* **PITHOS_DB**: The Pithos database (postgresql)
* **CYCLADES**: The Cyclades application (gunicorn)
* **CYCLADES_DB**: The Cyclades database (postgresql)
* **MQ**: The message queue (RabbitMQ)
* **GANETI_MASTER**: The Ganeti master of a Ganeti cluster
* **GANETI_NODE**: A VM-capable Ganeti node of a Ganeti cluster

You will probably also have:

* **CMS**: The CMS used as a frontend portal for the Synnefo services
* **NS**: A nameserver serving all other Synnefo nodes and resolving Synnefo FQDNs
* **CLIENT**: A machine that runs the Synnefo clients (e.g.: kamaki, Web UI),
  most of the time the end user's local machine

From this point on, we will also refer to the following groups of roles:

* **SYNNEFO**: [**ASTAKOS**, **ASTAKOS_DB**, **PITHOS**, **PITHOS_DB**, **CYCLADES**, **CYCLADES_DB**, **MQ**, **CMS**]
* **G_BACKEND**: [**GANETI_MASTER**, **GANETI_NODE**]

Of course, when deploying Synnefo you can combine multiple of the above roles
on a single physical node, but if you are trying to scale out, the above
separation gives you significant advantages.

So, in the next section we will take a look at which components you will have
to install on each physical node, depending on its Synnefo role. We assume the
graph's architecture.

Components for each role
------------------------

When deploying Synnefo at large scale, you need to install different Synnefo
and/or third party components on different physical nodes, according to their
Synnefo role, as stated in the previous section.

Specifically:

Role **WEBSERVER**
    * Synnefo components: `None`
    * 3rd party components: Apache
Role **ASTAKOS**
    * Synnefo components: `snf-webproject`, `snf-astakos-app`
    * 3rd party components: Django, Gunicorn
Role **ASTAKOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **PITHOS**
    * Synnefo components: `snf-webproject`, `snf-pithos-app`, `snf-pithos-webclient`
    * 3rd party components: Django, Gunicorn
Role **PITHOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **CYCLADES**
    * Synnefo components: `snf-webproject`, `snf-cyclades-app`, `snf-vncauthproxy`
    * 3rd party components: Django, Gunicorn
Role **CYCLADES_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **MQ**
    * Synnefo components: `None`
    * 3rd party components: RabbitMQ
Role **GANETI_MASTER**
    * Synnefo components: `snf-cyclades-gtools`
    * 3rd party components: Ganeti
Role **GANETI_NODE**
    * Synnefo components: `snf-cyclades-gtools`, `snf-network`, `snf-image`, `nfdhcpd`
    * 3rd party components: Ganeti
Role **CMS**
    * Synnefo components: `snf-webproject`, `snf-cloudcms`
    * 3rd party components: Django, Gunicorn
Role **NS**
    * Synnefo components: `None`
    * 3rd party components: BIND
Role **CLIENT**
    * Synnefo components: `kamaki`, `snf-image-creator`
    * 3rd party components: `None`

Example scale-out installation
------------------------------

In this section we describe an example of a medium-scale installation which
combines multiple roles on 10 different physical nodes. We also provide a
:ref:`guide <i-synnefo>` to help with such an installation.

We assume that we have the following 10 physical nodes with the corresponding
roles:

Node1:
    **WEBSERVER**, **ASTAKOS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-astakos-app <i-astakos>`
Node2:
    **WEBSERVER**, **PITHOS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-pithos-app <i-pithos>`
        * :ref:`snf-pithos-webclient <i-pithos>`
Node3:
    **WEBSERVER**, **CYCLADES**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cyclades-app <i-cyclades>`
        * :ref:`snf-vncauthproxy <i-cyclades>`
Node4:
    **WEBSERVER**, **CMS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cloudcms <i-cms>`
Node5:
    **ASTAKOS_DB**, **PITHOS_DB**, **CYCLADES_DB**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`postgresql <i-db>`
Node6:
    **MQ**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`rabbitmq <i-mq>`
Node7:
    **GANETI_MASTER**, **GANETI_NODE**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node8:
    **GANETI_NODE**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node9:
    **GANETI_NODE**
      Guide sections:
        `Same as Node8`
Node10:
    **GANETI_NODE**
      Guide sections:
        `Same as Node8`

All sections: :ref:`Scale out Guide <i-synnefo>`


Synnefo Upgrade Notes
=====================

.. toctree::
   :maxdepth: 1

   v0.12 -> v0.13 <upgrade/upgrade-0.13>

Older Cyclades Upgrade Notes
============================

.. toctree::
   :maxdepth: 2

   upgrade/cyclades-upgrade

Changelog
=========