
.. _admin-guide:

Synnefo Administrator's Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the complete Synnefo Administrator's Guide.



General Synnefo Architecture
============================

The following diagram shows the whole Synnefo architecture and how it interacts
with multiple Ganeti clusters. We hope that after reading the Administrator's
Guide you will be able to understand every component and all the interactions
between them. It is a good idea to first go through the Quick Administrator's
Guide before proceeding.

.. image:: images/synnefo-architecture1.png
   :width: 100%
   :target: _images/synnefo-architecture1.png



Identity Service (Astakos)
==========================

Overview
--------

Authentication methods
~~~~~~~~~~~~~~~~~~~~~~

Local Authentication
````````````````````

LDAP Authentication
```````````````````

.. _shibboleth-auth:

Shibboleth Authentication
`````````````````````````

Astakos can delegate user authentication to a Shibboleth federation.

To set up Shibboleth, install the package::

  apt-get install libapache2-mod-shib2

Edit the configuration files in ``/etc/shibboleth`` appropriately.

Add the following to ``/etc/apache2/sites-available/synnefo-ssl``::

  ShibConfig /etc/shibboleth/shibboleth2.xml
  Alias      /shibboleth-sp /usr/share/shibboleth

  <Location /im/login/shibboleth>
    AuthType shibboleth
    ShibRequireSession On
    ShibUseHeaders On
    require valid-user
  </Location>

and before the line containing::

  ProxyPass        / http://localhost:8080/ retry=0

add::

  ProxyPass /Shibboleth.sso !

Then, enable the shibboleth module::

  a2enmod shib2

After passing through the apache module, the following tokens should be
available at the destination::

  eppn # eduPersonPrincipalName
  Shib-InetOrgPerson-givenName
  Shib-Person-surname
  Shib-Person-commonName
  Shib-InetOrgPerson-displayName
  Shib-EP-Affiliation
  Shib-Session-ID

Finally, add ``shibboleth`` to the ``ASTAKOS_IM_MODULES`` list. The variable
resides in the file ``/etc/synnefo/20-snf-astakos-app-settings.conf``.

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Astakos
--------------------

User activation methods
~~~~~~~~~~~~~~~~~~~~~~~

When a new user signs up, he/she is not marked as active. You can see his/her
state by running (on the machine that runs the Astakos app):

.. code-block:: console

   $ snf-manage user-list

There are two different ways to activate a new user. Both need access to a
running :ref:`mail server <mail-server>`.

Manual activation
`````````````````

You can manually activate a new user that has already signed up, by sending
him/her an activation email. The email will contain an appropriate activation
link, which will complete the activation process if followed. You can send the
email by running:

.. code-block:: console

   $ snf-manage user-activation-send <user ID or email>

Make sure you have already set up your mail server and defined it in your
Synnefo settings before running the command.

Automatic activation
````````````````````

FIXME: Describe Regex activation method

Astakos advanced operations
---------------------------

Adding "Terms of Use"
~~~~~~~~~~~~~~~~~~~~~

Astakos supports versioned terms-of-use. First of all, you need to create an
HTML file that will contain your terms. For example, create the file
``/usr/share/synnefo/sample-terms.html``, which contains the following:

.. code-block:: html

   <h1>~okeanos terms</h1>

   These are the example terms for ~okeanos

Then, add those terms-of-use with the snf-manage command:

.. code-block:: console

   $ snf-manage term-add /usr/share/synnefo/sample-terms.html

Your terms have been successfully added and you will see the corresponding link
appearing in the Astakos web pages' footer.

Enabling reCAPTCHA
~~~~~~~~~~~~~~~~~~

Astakos supports the `reCAPTCHA <http://www.google.com/recaptcha>`_ feature.
If enabled, it protects the Astakos forms from bots. To enable the feature, go
to https://www.google.com/recaptcha/admin/create and create your own reCAPTCHA
key pair. Then edit ``/etc/synnefo/20-snf-astakos-app-settings.conf`` and set
the corresponding variables to reflect your newly created key pair. Finally, set
the ``ASTAKOS_RECAPTCHA_ENABLED`` variable to ``True``:

.. code-block:: console

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_ENABLED = True

Restart the service on the Astakos node(s) and you are ready:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Check out your new Sign up page. If you see the reCAPTCHA box, you have set up
everything correctly.



File Storage Service (Pithos)
=============================

Overview
--------

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Pithos
-------------------

Pithos advanced operations
--------------------------



Compute/Network/Image Service (Cyclades)
========================================

Compute Overview
----------------

Network Overview
----------------

Image Overview
--------------

Architecture
------------

Asynchronous communication with Ganeti backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Synnefo uses Google's Ganeti for VM cluster management. In order for Cyclades
to be able to handle thousands of user requests, Cyclades and Ganeti
communicate asynchronously. Briefly, requests are submitted to Ganeti through
Ganeti's RAPI/HTTP interface, and then asynchronous notifications about the
progress of Ganeti jobs are created and pushed upwards to Cyclades. The
architecture and communication with a Ganeti backend is shown in the diagram
below:

.. image:: images/cyclades-ganeti-communication.png
   :width: 50%
   :target: _images/cyclades-ganeti-communication.png

The Cyclades API server is responsible for handling user requests. Read-only
requests are served directly by looking up the Cyclades DB. If the request
requires an action in the Ganeti backend, Cyclades submits jobs to the Ganeti
master using the `Ganeti RAPI interface
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html>`_.

While Ganeti executes the job, `snf-ganeti-eventd`, `snf-ganeti-hook` and
`snf-progress-monitor` monitor the progress of the job and send
corresponding messages to the RabbitMQ servers. These components are part
of `snf-cyclades-gtools` and must be installed on all Ganeti nodes.
Specifically:

* *snf-ganeti-eventd* sends messages about operations affecting the operating
  state of instances and networks. It works by monitoring the Ganeti job queue.
* *snf-ganeti-hook* sends messages about the NICs of instances. It includes a
  number of `Ganeti hooks <http://docs.ganeti.org/ganeti/2.2/html/hooks.html>`_
  for customisation of operations.
* *snf-progress-monitor* sends messages about the progress of the Image
  deployment phase, which is done by the Ganeti OS Definition `snf-image`.

Finally, `snf-dispatcher` consumes messages from the RabbitMQ queues, processes
these messages and updates the state of the Cyclades DB accordingly. Subsequent
requests to the Cyclades API will retrieve the updated state from the DB.
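
The update step performed by `snf-dispatcher` can be sketched in Python. This
is an illustrative simplification, not the actual Synnefo code: the field and
instance names are made up, and the real daemon consumes AMQP messages and
updates Django models.

```python
# Hypothetical sketch: apply a Ganeti job notification to the server state
# recorded in the Cyclades DB (modeled here as a plain dict).
def apply_message(db, message):
    server = db[message["instance"]]
    # Messages may arrive out of order; only apply notifications that are
    # newer than the last job already recorded for this server.
    if message["job_id"] > server["last_job_id"]:
        server["state"] = message["operstate"]
        server["last_job_id"] = message["job_id"]

db = {"snf-42": {"state": "BUILD", "last_job_id": 0}}
apply_message(db, {"instance": "snf-42", "job_id": 7, "operstate": "STARTED"})
apply_message(db, {"instance": "snf-42", "job_id": 5, "operstate": "BUILD"})  # stale
print(db["snf-42"]["state"])  # STARTED
```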


Prereqs
-------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Installation
------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Configuration
-------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Working with Cyclades
---------------------

Managing Ganeti Backends
~~~~~~~~~~~~~~~~~~~~~~~~

Since v0.11, Synnefo is able to manage multiple Ganeti clusters (backends),
making it capable of scaling linearly to tens of thousands of VMs. Backends
can be dynamically added or removed via `snf-manage` commands.

Each newly created VM is allocated to a Ganeti backend by the Cyclades backend
allocator. The VM is "pinned" to this backend and cannot change it throughout
its lifetime. The backend allocator decides in which backend to spawn the VM
based on the available resources of each backend, trying to balance the load
between them.

Handling of networks, as far as backends are concerned, is based on whether the
network is public or not. Public networks are created through the `snf-manage
network-create` command, and are only created on one backend. Private networks
are created on all backends, in order to ensure that VMs residing on different
backends can be connected to the same private network.

Listing existing backends
`````````````````````````
To list all the Ganeti backends known to Synnefo, we run:

.. code-block:: console

   $ snf-manage backend-list

Adding a new Ganeti backend
```````````````````````````
Backends are dynamically added under the control of Synnefo with the
`snf-manage backend-add` command. In this section it is assumed that a Ganeti
cluster, named ``cluster.example.com``, is already up and running and
configured to be able to host Synnefo VMs.

To add this Ganeti cluster, we run:

.. code-block:: console

   $ snf-manage backend-add --clustername=cluster.example.com --user="synnefo_user" --pass="synnefo_pass"

where ``clustername`` is the cluster hostname of the Ganeti cluster, and
``user`` and ``pass`` are the credentials for the `Ganeti RAPI user
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html#users-and-passwords>`_.  All
backend attributes can also be changed dynamically using the `snf-manage
backend-modify` command.

``snf-manage backend-add`` will also create all existing private networks on
the new backend. You can verify that the backend has been added by running
`snf-manage backend-list`.

Note that no VMs will be spawned on this backend, since by default it is in a
``drained`` state after addition and also has no public network assigned to
it.

So, first you need to create its public network, make sure everything works as
expected, and finally make it active by un-setting the ``drained`` flag. You
can do this by running:

.. code-block:: console

   $ snf-manage backend-modify --drained=False <backend_id>

Removing an existing Ganeti backend
```````````````````````````````````
In order to remove an existing backend from Synnefo, we run:

.. code-block:: console

   # snf-manage backend-remove <backend_id>

This command will fail if there are active VMs on the backend. Also, the
backend is not cleaned before removal, so all the Synnefo private networks
will be left on the Ganeti nodes. You need to remove them manually.

Allocation of VMs in Ganeti backends
````````````````````````````````````
As already mentioned, the Cyclades backend allocator is responsible for
allocating new VMs to backends. This allocator does not choose the exact Ganeti
node that will host the VM but just the Ganeti backend. The exact node is
chosen by the Ganeti cluster's allocator (hail).

The decision about which backend will host a VM is based on the available
resources. The allocator computes a score for each backend that reflects its
load factor, and the one with the minimum score is chosen. The admin can
exclude backends from the allocation phase by marking them as ``drained``, by
running:

.. code-block:: console

   $ snf-manage backend-modify --drained=True <backend_id>

The backend resources are updated periodically, at a period defined by
the ``BACKEND_REFRESH_MIN`` setting, or by running the `snf-manage
backend-update-status` command. It is advised to have a cron job running this
command at a smaller interval than ``BACKEND_REFRESH_MIN`` in order to remove
the load of refreshing the backends' stats from the VM creation phase.

Finally, the admin can decide to have a user's VMs allocated to a
specific backend, with the ``BACKEND_PER_USER`` setting. This is a mapping
between users and backends. If the user is found in ``BACKEND_PER_USER``, then
Synnefo allocates all his/her VMs to the specific backend in the variable,
even if it is marked as drained (useful for testing).
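
The allocation logic described above can be sketched as follows. This is an
illustrative simplification: the scoring formula (fraction of allocated
memory) is a stand-in, not the actual Cyclades implementation, and the
``BACKEND_PER_USER`` mapping shown is a made-up example.

```python
# Sketch of backend selection: pick the non-drained backend with the
# minimum load score, unless the user is pinned via BACKEND_PER_USER.
BACKEND_PER_USER = {"tester@example.com": 3}  # hypothetical mapping

def score(backend):
    # Stand-in load factor: fraction of memory already allocated.
    return backend["mem_allocated"] / backend["mem_total"]

def allocate(backends, user):
    if user in BACKEND_PER_USER:
        # Pinned users bypass scoring, even if their backend is drained.
        return BACKEND_PER_USER[user]
    candidates = [b for b in backends if not b["drained"]]
    return min(candidates, key=score)["id"]

backends = [
    {"id": 1, "drained": False, "mem_allocated": 512, "mem_total": 1024},
    {"id": 2, "drained": False, "mem_allocated": 128, "mem_total": 1024},
    {"id": 3, "drained": True,  "mem_allocated": 0,   "mem_total": 1024},
]
print(allocate(backends, "user@example.com"))    # 2 (lowest score)
print(allocate(backends, "tester@example.com"))  # 3 (pinned, despite drained)
```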

Managing Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, Cyclades uses Ganeti for the management of VMs. The administrator
can handle Cyclades VMs just like any other Ganeti instance, via `gnt-instance`
commands. All Ganeti instances that belong to Synnefo are separated from
others by a prefix in their names. This prefix is defined in the
``BACKEND_PREFIX_ID`` setting in
``/etc/synnefo/20-snf-cyclades-app-backend.conf``.

Apart from handling instances directly at the Ganeti level, a number of
`snf-manage` commands are available:

* ``snf-manage server-list``: List servers
* ``snf-manage server-show``: Show information about a server in the Cyclades DB
* ``snf-manage server-inspect``: Inspect the state of a server both in DB and Ganeti
* ``snf-manage server-modify``: Modify the state of a server in the Cyclades DB
* ``snf-manage server-create``: Create a new server
* ``snf-manage server-import``: Import an existing Ganeti instance to Cyclades
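
As an illustration of the naming convention, selecting the Synnefo-owned
instances out of a full instance listing boils down to a prefix check. The
prefix value below is an assumption; use your own ``BACKEND_PREFIX_ID``:

```python
# Sketch: Ganeti instances belonging to Synnefo carry the BACKEND_PREFIX_ID
# prefix in their names; everything else on the cluster is left alone.
BACKEND_PREFIX_ID = "snf-"  # assumed value; check your own settings

def synnefo_instances(ganeti_instances):
    """Return only the instances managed by Synnefo."""
    return [i for i in ganeti_instances if i.startswith(BACKEND_PREFIX_ID)]

names = ["snf-101", "legacy-vm", "snf-102"]
print(synnefo_instances(names))  # ['snf-101', 'snf-102']
```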

Managing Virtual Networks
~~~~~~~~~~~~~~~~~~~~~~~~~

Cyclades is able to create and manage virtual networks. Networking is
deployment specific and must be customized based on the specific needs of the
system administrator. For a better understanding of networking please refer to
the :ref:`Network <networks>` section.

Exactly as Cyclades VMs can be handled like Ganeti instances, Cyclades networks
can also be handled as Ganeti networks, via `gnt-network` commands. All Ganeti
networks that belong to Synnefo are named with the prefix
`${BACKEND_PREFIX_ID}-net-`.

There are also the following `snf-manage` commands for managing networks:

* ``snf-manage network-list``: List networks
* ``snf-manage network-show``: Show information about a network in the Cyclades DB
* ``snf-manage network-inspect``: Inspect the state of the network in DB and Ganeti backends
* ``snf-manage network-modify``: Modify the state of a network in the Cyclades DB
* ``snf-manage network-create``: Create a new network
* ``snf-manage network-remove``: Remove an existing network

Managing Network Resources
``````````````````````````

Proper operation of the Cyclades Network Service depends on the unique
assignment of specific resources to each type of virtual network. Specifically,
these resources are:

* IP addresses. Cyclades creates a pool of IPs for each network, and assigns a
  unique IP address to each VM, thus connecting it to this network. You can see
  the IP pool of each network by running `snf-manage network-inspect
  <network_ID>`. IP pools are automatically created and managed by Cyclades,
  depending on the subnet of the network.
* Bridges corresponding to physical VLANs, which are required for networks of
  type `PRIVATE_PHYSICAL_VLAN`.
* One bridge corresponding to one physical VLAN, which is required for networks
  of type `PRIVATE_MAC_PREFIX`.

Cyclades allocates those resources from pools that are created by the
administrator with the `snf-manage pool-create` management command.
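
A rough sketch of what such an IP pool does, using Python's ``ipaddress``
module. This only illustrates the idea of handing out unique host addresses
from the network's subnet; the actual Cyclades pool implementation differs:

```python
import ipaddress

# Sketch: hand out unique host addresses from a network's subnet,
# excluding the gateway (the network and broadcast addresses are already
# excluded by ip_network.hosts()).
class IPPool:
    def __init__(self, subnet, gateway):
        net = ipaddress.ip_network(subnet)
        self.free = [str(h) for h in net.hosts() if str(h) != gateway]

    def allocate(self):
        return self.free.pop(0)

    def release(self, address):
        self.free.insert(0, address)

pool = IPPool("192.168.0.0/29", gateway="192.168.0.1")
print(pool.allocate())  # 192.168.0.2
print(pool.allocate())  # 192.168.0.3
```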

Pool Creation
`````````````
Pools are created using the `snf-manage pool-create` command:

.. code-block:: console

   # snf-manage pool-create --type=bridge --base=prv --size=20

will create a pool of 20 bridges: prv1, prv2, ..., prv20.

You can verify the creation of the pool, and check its contents, by running:

.. code-block:: console

   # snf-manage pool-list
   # snf-manage pool-show --type=bridge 1

With the same commands you can handle a pool of MAC prefixes. For example:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

will create a pool of MAC prefixes from ``aa:00:1`` to ``b9:ff:f``. The MAC
prefix pool is responsible for providing only unicast and locally administered
MAC addresses, so many of these prefixes will be externally reserved, to
exclude them from allocation.
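
The unicast/locally-administered constraint can be checked with two bit tests
on the first octet of the prefix: the least significant bit (multicast) must be
clear and the second least significant bit (locally administered) must be set.
A small sketch:

```python
# Sketch: a MAC prefix such as "aa:00:1" fixes the first 20 bits of the
# address. It is usable only if the first octet is unicast (bit 0 clear)
# and locally administered (bit 1 set).
def usable_prefix(prefix):
    first_octet = int(prefix.split(":")[0], 16)
    unicast = (first_octet & 0x01) == 0
    locally_administered = (first_octet & 0x02) != 0
    return unicast and locally_administered

print(usable_prefix("aa:00:1"))  # True  (0xaa = 0b10101010)
print(usable_prefix("ab:00:1"))  # False (multicast bit set)
print(usable_prefix("b8:00:1"))  # False (globally administered)
```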

Cyclades advanced operations
----------------------------

Reconciliation mechanism
~~~~~~~~~~~~~~~~~~~~~~~~

On certain occasions, such as a Ganeti or RabbitMQ failure, the state of the
Cyclades database may differ from the real state of VMs and networks in the
Ganeti backends. The reconciliation process is designed to synchronize
the state of the Cyclades DB with Ganeti. There are two management commands
for reconciling VMs and networks.

Reconciling Virtual Machines
````````````````````````````

Reconciliation of VMs detects the following conditions:

 * Stale DB servers without corresponding Ganeti instances
 * Orphan Ganeti instances, without corresponding DB entries
 * Out-of-sync state of DB entries with respect to Ganeti instances

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-servers

Adding the `--fix-all` option will perform the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-servers --fix-all

Please see ``snf-manage reconcile-servers --help`` for all the details.
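
The first two conditions amount to set differences between the server IDs
recorded in the DB and the instances reported by Ganeti. A minimal sketch of
the detection step (the real command also compares the operational state of
each matched pair):

```python
# Sketch of stale/orphan detection during reconciliation.
def detect(db_servers, ganeti_instances):
    db, ganeti = set(db_servers), set(ganeti_instances)
    stale = db - ganeti    # in the DB, but no Ganeti instance
    orphan = ganeti - db   # in Ganeti, but no DB entry
    return stale, orphan

stale, orphan = detect({"snf-1", "snf-2"}, {"snf-2", "snf-3"})
print(sorted(stale))   # ['snf-1']
print(sorted(orphan))  # ['snf-3']
```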


Reconciling Networks
````````````````````

Reconciliation of networks detects the following conditions:

  * Stale DB networks without corresponding Ganeti networks
  * Orphan Ganeti networks, without corresponding DB entries
  * Private networks that are not created in all Ganeti backends
  * Unsynchronized IP pools

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-networks

Adding the `--fix-all` option will perform the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-networks --fix-all

Please see ``snf-manage reconcile-networks --help`` for all the details.



Block Storage Service (Archipelago)
===================================

Overview
--------
Archipelago offers Copy-On-Write snapshottable volumes. Pithos images can be
used to provision a volume with Copy-On-Write semantics (i.e. a clone).
Snapshots offer a unique deduplicated image of a volume, which reflects the
volume state at the time of snapshot creation, and are indistinguishable from a
Pithos image.

Archipelago is used by Cyclades and Ganeti for fast provisioning of VMs based
on CoW volumes.

Architecture
------------
.. image:: images/archipelago-architecture.png
   :width: 50%
   :target: _images/archipelago-architecture.png

Prereqs
-------
The administrator must initialize the storage backend where Archipelago volume
blocks will reside.

In the case of a files backend, the administrator must create two directories:
one for the Archipelago data blocks and one for the Archipelago map blocks.
These should probably be on shared storage, to enable sharing Archipelago
volumes between multiple nodes. He or she must also supply a directory where
the Pithos data and map blocks reside.

In the case of a RADOS backend, the administrator must create two RADOS pools,
one for the data blocks and one for the map blocks. These pools must be the
same pools used by Pithos, in order to enable volume creation based on Pithos
images.

Installation
------------
Archipelago consists of:

* ``libxseg0``: libxseg, used to communicate over shared memory segments
* ``python-xseg``: Python bindings for libxseg
* ``archipelago-kernel-dkms``: contains the Archipelago kernel modules that
  provide block devices to be used as VM disks
* ``python-archipelago``: the Archipelago Python module. Includes archipelago
  and vlmc functionality.
* ``archipelago``: user-space tools and peers for Archipelago management and
  volume composition
* ``archipelago-ganeti``: Ganeti ExtStorage scripts that enable Ganeti to
  provision VMs over Archipelago

Running

.. code-block:: console

  $ apt-get install archipelago-ganeti

should fetch all the required packages and get you up and running with
Archipelago.

Bear in mind that a custom librados is required, which is provided in the apt
repo of GRNET.

For now, librados is a dependency of archipelago, even if you do not intend to
use Archipelago over RADOS.

Configuration
-------------
Archipelago should work out of the box with a RADOS backend, but basic
configuration can be done in ``/etc/default/archipelago``.

If you wish to change the storage backend to files, set

.. code-block:: console

   STORAGE="files"

and provide the appropriate settings for the files storage backend in the conf
file.

These are:

* ``FILED_IMAGES``: directory for Archipelago data blocks.
* ``FILED_MAPS``: directory for Archipelago map blocks.
* ``PITHOS``: directory of Pithos data blocks.
* ``PITHOSMAPS``: directory of Pithos map blocks.

The settings for the RADOS storage backend are:

* ``RADOS_POOL_MAPS``: The pool where Archipelago and Pithos map blocks reside.
* ``RADOS_POOL_BLOCKS``: The pool where Archipelago and Pithos data blocks
  reside.

Examples can be found in the conf file.

Be aware that the Archipelago infrastructure doesn't provide default values for
these settings. If they are not set in the conf file, Archipelago will not be
able to function.

Archipelago also provides ``VERBOSITY`` config options to control the output
generated by the userspace peers.

The available options are:

* ``VERBOSITY_BLOCKERB``
* ``VERBOSITY_BLOCKERM``
* ``VERBOSITY_MAPPER``
* ``VERBOSITY_VLMC``

and the available values are:

* 0: Error-only logging.
* 1: Warning logging.
* 2: Info logging.
* 3: Debug logging. WARNING: This option produces tons of output, but the
  logrotate daemon should take care of it.

Working with Archipelago
------------------------

``archipelago`` provides basic functionality for Archipelago.

Usage:

.. code-block:: console

  $ archipelago [-u] command

Currently it supports the following commands:

* ``start [peer]``
  Starts Archipelago or the specified peer.
* ``stop [peer]``
  Stops Archipelago or the specified peer.
* ``restart [peer]``
  Restarts Archipelago or the specified peer.
* ``status``
  Shows the status of Archipelago.

Available peers: ``blockerm``, ``blockerb``, ``mapperd``, ``vlmcd``.

``start``, ``stop`` and ``restart`` can be combined with the ``-u / --user``
option to affect only the userspace peers supporting Archipelago.

Archipelago advanced operations
-------------------------------
The ``vlmc`` tool provides a way to interact with Archipelago volumes.

* ``vlmc map <volumename>``: maps the volume to an xsegbd device.

* ``vlmc unmap </dev/xsegbd[1-..]>``: unmaps the specified device from the
  system.

* ``vlmc create <volumename> --snap <snapname> --size <size>``: creates a new
  volume named <volumename> from the snapshot named <snapname> with size
  <size>.

  ``--snap`` and ``--size`` are optional, but at least one of them is
  mandatory. E.g.:

  ``vlmc create <volumename> --snap <snapname>`` creates a volume named
  volumename from snapshot snapname. The size of the volume is the same as
  the size of the snapshot.

  ``vlmc create <volumename> --size <size>`` creates an empty volume of size
  <size> named <volumename>.

* ``vlmc remove <volumename>``: removes the volume and all the related
  Archipelago blocks from storage.

* ``vlmc list``: provides a list of Archipelago volumes. Currently only works
  with the RADOS storage backend.

* ``vlmc info <volumename>``: shows volume information. Currently returns only
  the volume size.

* ``vlmc open <volumename>``: opens an Archipelago volume. That is, it takes
  all the necessary locks and also makes the rest of the infrastructure aware
  of the operation.

  This operation succeeds even if the volume is already open.

* ``vlmc close <volumename>``: closes an Archipelago volume. That is, it
  performs all the necessary functions in the infrastructure to successfully
  release the volume. It also releases all the acquired locks.

  ``vlmc close`` should be performed after a ``vlmc open`` operation.

* ``vlmc lock <volumename>``: locks a volume. This step allows the
  administrator to lock an Archipelago volume, independently from the rest of
  the infrastructure.

* ``vlmc unlock [-f] <volumename>``: unlocks a volume. This allows the
  administrator to unlock a volume, independently from the rest of the
  infrastructure.
  Unlocking can be performed only by the blocker that acquired the lock
  in the first place. To unlock a volume from another blocker, the ``-f``
  option must be used to break the lock.
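
The ``--snap``/``--size`` rule of ``vlmc create`` (at least one must be given,
and a snapshot supplies the default size) can be summarized in a short sketch;
the snapshot catalogue and sizes below are made-up examples:

```python
# Sketch of the argument rule for `vlmc create`: at least one of
# --snap/--size is mandatory; without --size the volume inherits the
# snapshot's size. The catalogue below is hypothetical.
SNAPSHOT_SIZES = {"base-snap": 2048}  # snapshot name -> size (MB)

def create_volume(name, snap=None, size=None):
    if snap is None and size is None:
        raise ValueError("at least one of --snap/--size is required")
    if size is None:
        size = SNAPSHOT_SIZES[snap]  # clone keeps the snapshot's size
    return {"name": name, "snap": snap, "size": size}

print(create_volume("vol1", snap="base-snap"))  # cloned, size 2048
print(create_volume("vol2", size=1024))         # empty volume of size 1024
```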

The "kamaki" API client
=======================

To upload, register or modify an image you will need the **kamaki** tool.
Before proceeding, make sure that it is configured properly. Verify that
*image_url*, *storage_url* and *token* are set as needed:

.. code-block:: console

   $ kamaki config list

To change a setting use ``kamaki config set``:

.. code-block:: console

   $ kamaki config set image_url https://cyclades.example.com/plankton
   $ kamaki config set storage_url https://pithos.example.com/v1
   $ kamaki config set token ...

Upload Image
------------

As a shortcut, you can configure a default account and container that will be
used by the ``kamaki store`` commands:

.. code-block:: console

   $ kamaki config set storage_account images@example.com
   $ kamaki config set storage_container images

If the container does not exist, you will have to create it before uploading
any images:

.. code-block:: console

   $ kamaki store create images

You are now ready to upload an image. You can upload it with a Pithos+ client,
or use kamaki directly:

.. code-block:: console

   $ kamaki store upload ubuntu.iso

You can use any Pithos+ client to verify that the image was uploaded correctly.
The full Pithos URL for the previous example will be
``pithos://images@example.com/images/ubuntu.iso``.
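
The URL is simply composed from the storage account, container and object
name, e.g.:

```python
# Sketch: compose the pithos:// URL used when registering an image, from
# the storage account, container and uploaded object name.
def pithos_url(account, container, obj):
    return "pithos://%s/%s/%s" % (account, container, obj)

print(pithos_url("images@example.com", "images", "ubuntu.iso"))
# pithos://images@example.com/images/ubuntu.iso
```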


Register Image
--------------

To register an image you will need to use the full Pithos+ URL. To register the
image from the previous example as a public image, use:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso --public

The ``--public`` flag is important: if it is missing, the registered image will
not be listed by ``kamaki glance list``.

Use ``kamaki glance register`` with no arguments to see a list of available
options. A more complete example would be the following:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso \
            --public --disk-format diskdump --property kernel=3.1.2

To verify that the image was registered successfully use:

.. code-block:: console

   $ kamaki glance list -l



Miscellaneous
=============

.. RabbitMQ

RabbitMQ Broker
---------------

Queue nodes run the RabbitMQ software, which provides AMQP functionality. To
guarantee high availability, more than one Queue node should be deployed, each
of them belonging to the same `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Synnefo uses the RabbitMQ
active/active `Highly Available Queues <http://www.rabbitmq.com/ha.html>`_,
which are mirrored between two nodes within a RabbitMQ cluster.

The RabbitMQ nodes that form the cluster are declared to Synnefo through the
`AMQP_HOSTS` setting. Each time a Synnefo component needs to connect to
RabbitMQ, one of these nodes is chosen at random. The client that Synnefo
uses to connect to RabbitMQ handles connection failures transparently and
tries to reconnect to a different node. As long as one of these nodes is up
and running, the functionality of Synnefo should not be degraded by RabbitMQ
node failures.
847

    
848
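As an illustration, a deployment with two Queue nodes could declare both
brokers in its Synnefo settings along these lines. This is only a sketch: the
hostnames and credentials reuse the placeholders from the installation example
below, and the exact URL format expected by `AMQP_HOSTS` in your Synnefo
version may differ.

```python
# Hypothetical excerpt from a Synnefo settings file (e.g. under /etc/synnefo/).
# Each entry names one RabbitMQ broker of the cluster; Synnefo picks one at
# random and reconnects to a different entry if the connection fails.
AMQP_HOSTS = [
    "amqp://synnefo:example_pass@node1:5672",
    "amqp://synnefo:example_pass@node2:5672",
]
```
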
All the queues in use are declared as durable, meaning that messages are
persistently stored in RabbitMQ until they are successfully processed by a
client.

Currently, RabbitMQ is used by the following components:

* `snf-ganeti-eventd`, `snf-ganeti-hook` and `snf-progress-monitor`:
  These components send messages concerning the status and progress of
  jobs in the Ganeti backend.
* `snf-dispatcher`: This daemon consumes the messages that are sent from
  the above components, and updates the Cyclades DB accordingly.

Installation
````````````
Please check the RabbitMQ documentation, which extensively covers the
`installation of RabbitMQ server <http://www.rabbitmq.com/download.html>`_ and
the setup of a `RabbitMQ cluster <http://www.rabbitmq.com/clustering.html>`_.
Also, check out the `web management plugin
<http://www.rabbitmq.com/management.html>`_, which can be useful for managing
and monitoring RabbitMQ.

For a basic installation of RabbitMQ on two nodes (node1 and node2) you can do
the following:

On both nodes, install rabbitmq-server and create a Synnefo user:

.. code-block:: console

  $ apt-get install rabbitmq-server
  $ rabbitmqctl add_user synnefo "example_pass"
  $ rabbitmqctl set_permissions synnefo  ".*" ".*" ".*"

Also guarantee that both nodes share the same cookie by running:

.. code-block:: console

  $ scp node1:/var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/.erlang.cookie

and restart the nodes:

.. code-block:: console

  $ /etc/init.d/rabbitmq-server restart

To set up the RabbitMQ cluster, run:

.. code-block:: console

  root@node2: rabbitmqctl stop_app
  root@node2: rabbitmqctl reset
  root@node2: rabbitmqctl cluster rabbit@node1 rabbit@node2
  root@node2: rabbitmqctl start_app

You can verify that the cluster is set up correctly by running:

.. code-block:: console

  root@node2: rabbitmqctl cluster_status

Admin tool: snf-manage
----------------------

``snf-manage`` is a tool used to perform various administrative tasks. It
needs to be able to access the Django database, so it must be run from an
environment that is able to import the Django settings.

Additionally, administrative tasks can be performed via the admin web
interface located at /admin. Only users of type ADMIN can access the admin
pages. To change the type of a user to ADMIN, ``snf-manage`` can be used:

.. code-block:: console

   $ snf-manage user-modify 42 --type ADMIN

Logging
-------

Synnefo uses Python's logging module. The module is configured using
dictionary configuration, whose format is described here:

http://docs.python.org/release/2.7.1/library/logging.html#logging-config-dictschema

Note that this is a feature of Python 2.7 that we have backported for use in
Python 2.6.

The logging configuration dictionary is defined in
``/etc/synnefo/10-snf-webproject-logging.conf``.

The administrator can have finer logging control by modifying the
``LOGGING_SETUP`` dictionary and defining subloggers with different handlers
and log levels. For example, to enable debug messages only for the API, set
the level of 'synnefo.api' to ``DEBUG``.

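Such an override could look like the following sketch. The handler and logger
names here are illustrative assumptions; the actual structure of
``LOGGING_SETUP`` in your installation may differ, so adapt the keys to what
your ``10-snf-webproject-logging.conf`` already contains.

```python
# Hypothetical excerpt from /etc/synnefo/10-snf-webproject-logging.conf,
# in Python dictConfig format: everything logs to syslog at INFO, while
# the 'synnefo.api' sublogger alone is raised to DEBUG.
LOGGING_SETUP = {
    'version': 1,
    'handlers': {
        'syslog': {
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',
        },
    },
    'loggers': {
        # default level for all Synnefo loggers
        'synnefo': {'handlers': ['syslog'], 'level': 'INFO'},
        # sublogger: debug messages only for the API
        'synnefo.api': {'level': 'DEBUG'},
    },
}
```
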
By default, the Django webapp and snf-manage log to syslog, while
`snf-dispatcher` logs to `/var/log/synnefo/dispatcher.log`.


Scaling up to multiple nodes
============================

Here we will describe how to deploy all services, interconnected with each
other, on multiple physical nodes.

synnefo components
------------------

You need to install the appropriate synnefo software components on each node,
depending on its type; see :ref:`Architecture <cyclades-architecture>`.

Please see the page of each synnefo software component for specific
installation instructions, where applicable.

Install the following synnefo components:

Nodes of type :ref:`APISERVER <APISERVER_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-webproject <snf-webproject>`,
    :ref:`snf-cyclades-app <snf-cyclades-app>`
Nodes of type :ref:`GANETI-MASTER <GANETI_MASTER>` and :ref:`GANETI-NODE <GANETI_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-cyclades-gtools <snf-cyclades-gtools>`
Nodes of type :ref:`LOGIC <LOGIC_NODE>`
    Components
    :ref:`snf-common <snf-common>`,
    :ref:`snf-webproject <snf-webproject>`,
    :ref:`snf-cyclades-app <snf-cyclades-app>`.


Upgrade Notes
=============

Cyclades upgrade notes
----------------------

.. toctree::
   :maxdepth: 2

   cyclades-upgrade

Changelog
=========