.. _admin-guide:

Synnefo Administrator's Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the complete Synnefo Administrator's Guide.


General Synnefo Architecture
============================

The following graph shows the whole Synnefo architecture and how it interacts
with multiple Ganeti clusters. We hope that after reading the Administrator's
Guide you will be able to understand every component and all the interactions
between them. It is a good idea to first go through the Quick Administrator's
Guide before proceeding.

.. image:: images/synnefo-arch2.png
   :width: 100%
   :target: _images/synnefo-arch2.png


Identity Service (Astakos)
==========================

Overview
--------

Authentication methods
~~~~~~~~~~~~~~~~~~~~~~

Local Authentication
````````````````````

LDAP Authentication
```````````````````

.. _shibboleth-auth:

Shibboleth Authentication
`````````````````````````

Astakos can delegate user authentication to a Shibboleth federation.

To set up Shibboleth, install the corresponding Apache module::

  apt-get install libapache2-mod-shib2

Adjust the configuration files in ``/etc/shibboleth`` appropriately.

Add in ``/etc/apache2/sites-available/synnefo-ssl``::

  ShibConfig /etc/shibboleth/shibboleth2.xml
  Alias      /shibboleth-sp /usr/share/shibboleth

  <Location /im/login/shibboleth>
    AuthType shibboleth
    ShibRequireSession On
    ShibUseHeaders On
    require valid-user
  </Location>

and before the line containing::

  ProxyPass        / http://localhost:8080/ retry=0

add::

  ProxyPass /Shibboleth.sso !

Then, enable the shibboleth module::

  a2enmod shib2

After passing through the Apache module, the following tokens should be
available at the destination::

  eppn # eduPersonPrincipalName
  Shib-InetOrgPerson-givenName
  Shib-Person-surname
  Shib-Person-commonName
  Shib-InetOrgPerson-displayName
  Shib-EP-Affiliation
  Shib-Session-ID

Finally, add 'shibboleth' to the ``ASTAKOS_IM_MODULES`` list. The variable
resides in the file ``/etc/synnefo/20-snf-astakos-app-settings.conf``.
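For instance, the resulting setting could look like the following (assuming
local authentication is to remain enabled as well; the exact list depends on
your deployment)::

   ASTAKOS_IM_MODULES = ['local', 'shibboleth']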
Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Astakos
--------------------

User activation methods
~~~~~~~~~~~~~~~~~~~~~~~

When a new user signs up, he/she is not marked as active. You can see his/her
state by running (on the machine that runs the Astakos app):

.. code-block:: console

   $ snf-manage user-list

There are two different ways to activate a new user. Both need access to a
running :ref:`mail server <mail-server>`.

Manual activation
`````````````````

You can manually activate a new user that has already signed up, by sending
him/her an activation email. The email will contain an appropriate activation
link, which will complete the activation process if followed. You can send the
email by running:

.. code-block:: console

   $ snf-manage user-activation-send <user ID or email>

Be sure to have already set up your mail server and defined it in your Synnefo
settings, before running the command.

Automatic activation
````````````````````

FIXME: Describe Regex activation method
Setting quota limits
~~~~~~~~~~~~~~~~~~~~

Set default quotas
``````````````````

In ``20-snf-astakos-app-settings.conf``, uncomment the default setting
``ASTAKOS_SERVICES`` and customize the ``'uplimit'`` values. These are the
default base quotas for all users.

To apply your configuration run::

    # snf-manage astakos-init --load-service-resources
    # snf-manage astakos-quota --sync

Set base quotas for individual users
````````````````````````````````````

For individual users that need different quotas than the default,
you can set them for each resource like this::

    # use this to display quotas / uuid
    # snf-manage user-show 'uuid or email'

    # snf-manage user-set-initial-quota --set-capacity 'user-uuid' 'cyclades.vm' 10

    # this applies the configuration
    # snf-manage astakos-quota --sync --user 'user-uuid'


Enable the Projects feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to enable the projects feature, so that users may apply
on their own for resources by creating and joining projects,
set the following in ``20-snf-astakos-app-settings.conf``::

    # this will allow at most one pending project application per user
    ASTAKOS_PENDING_APPLICATION_LIMIT = 1
    # this will make the 'projects' page visible in the dashboard
    ASTAKOS_PROJECTS_VISIBLE = True
When users apply for projects, they are not automatically granted
the resources. They must first be approved by the administrator.

To list pending project applications in Astakos::

    # snf-manage project-list --pending

Note the last column, the application id. To approve it::

    # <app id> from the last column of project-list
    # snf-manage project-control --approve <app id>

To deny an application::

    # snf-manage project-control --deny <app id>


Astakos advanced operations
---------------------------

Adding "Terms of Use"
~~~~~~~~~~~~~~~~~~~~~

Astakos supports versioned terms-of-use. First of all you need to create an
HTML file that will contain your terms. For example, create the file
``/usr/share/synnefo/sample-terms.html``, which contains the following:

.. code-block:: console

   <h1>~okeanos terms</h1>

   These are the example terms for ~okeanos

Then, add those terms-of-use with the snf-manage command:

.. code-block:: console

   $ snf-manage term-add /usr/share/synnefo/sample-terms.html

Your terms have been successfully added and you will see the corresponding link
appearing in the Astakos web pages' footer.

Enabling reCAPTCHA
~~~~~~~~~~~~~~~~~~

Astakos supports the `reCAPTCHA <http://www.google.com/recaptcha>`_ feature.
If enabled, it protects the Astakos forms from bots. To enable the feature, go
to https://www.google.com/recaptcha/admin/create and create your own reCAPTCHA
key pair. Then edit ``/etc/synnefo/20-snf-astakos-app-settings.conf`` and set
the corresponding variables to reflect your newly created key pair. Finally, set
the ``ASTAKOS_RECAPTCHA_ENABLED`` variable to ``True``:

.. code-block:: console

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_ENABLED = True

Restart the service on the Astakos node(s) and you are ready:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Check out your new sign-up page. If you see the reCAPTCHA box, you have set up
everything correctly.
File Storage Service (Pithos)
=============================

Overview
--------

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Pithos
-------------------

Pithos advanced operations
--------------------------


Compute/Network/Image Service (Cyclades)
========================================

Compute Overview
----------------

Network Overview
----------------

Image Overview
--------------

Architecture
------------

Asynchronous communication with Ganeti backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Synnefo uses Google Ganeti backends for VM cluster management. In order for
Cyclades to be able to handle thousands of user requests, Cyclades and Ganeti
communicate asynchronously. Briefly, requests are submitted to Ganeti through
Ganeti's RAPI/HTTP interface, and then asynchronous notifications about the
progress of Ganeti jobs are created and pushed upwards to Cyclades. The
architecture and communication with a Ganeti backend is shown in the graph
below:

.. image:: images/cyclades-ganeti-communication.png
   :width: 50%
   :target: _images/cyclades-ganeti-communication.png

The Cyclades API server is responsible for handling user requests. Read-only
requests are served directly by looking up the Cyclades DB. If the request
needs an action in the Ganeti backend, Cyclades submits jobs to the Ganeti
master using the `Ganeti RAPI interface
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html>`_.

While Ganeti executes the job, `snf-ganeti-eventd`, `snf-ganeti-hook` and
`snf-progress-monitor` monitor the progress of the job and send
corresponding messages to the RabbitMQ servers. These components are part
of `snf-cyclades-gtools` and must be installed on all Ganeti nodes. Specifically:

* *snf-ganeti-eventd* sends messages about operations affecting the operating
  state of instances and networks. It works by monitoring the Ganeti job queue.
* *snf-ganeti-hook* sends messages about the NICs of instances. It includes a
  number of `Ganeti hooks <http://docs.ganeti.org/ganeti/2.2/html/hooks.html>`_
  for customisation of operations.
* *snf-progress-monitor* sends messages about the progress of the Image deployment
  phase, which is done by the Ganeti OS Definition `snf-image`.

Finally, `snf-dispatcher` consumes messages from the RabbitMQ queues, processes
them and updates the state of the Cyclades DB accordingly. Subsequent
requests to the Cyclades API will retrieve the updated state from the DB.
Prereqs
-------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Installation
------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Configuration
-------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Working with Cyclades
---------------------

Managing Ganeti Backends
~~~~~~~~~~~~~~~~~~~~~~~~

Since v0.11, Synnefo is able to manage multiple Ganeti clusters (backends),
making it capable of scaling linearly to tens of thousands of VMs. Backends
can be dynamically added or removed via `snf-manage` commands.

Each newly created VM is allocated to a Ganeti backend by the Cyclades backend
allocator. The VM is "pinned" to this backend and cannot change it through its
lifetime. The backend allocator decides in which backend to spawn the VM based
on the available resources of each backend, trying to balance the load between
them.

Handling of networks, as far as backends are concerned, is based on whether the
network is public or not. Public networks are created through the `snf-manage
network-create` command, and are only created on one backend. Private networks
are created on all backends, in order to ensure that VMs residing on different
backends can be connected to the same private network.

Listing existing backends
`````````````````````````
To list all the Ganeti backends known to Synnefo, we run:

.. code-block:: console

   $ snf-manage backend-list

Adding a new Ganeti backend
```````````````````````````
Backends are dynamically added under the control of Synnefo with the `snf-manage
backend-add` command. In this section it is assumed that a Ganeti cluster
named ``cluster.example.com`` is already up and running and configured to be
able to host Synnefo VMs.

To add this Ganeti cluster, we run:

.. code-block:: console

   $ snf-manage backend-add --clustername=cluster.example.com --user="synnefo_user" --pass="synnefo_pass"

where ``clustername`` is the cluster hostname of the Ganeti cluster, and
``user`` and ``pass`` are the credentials for the `Ganeti RAPI user
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html#users-and-passwords>`_. All
backend attributes can also be changed dynamically using the `snf-manage
backend-modify` command.

``snf-manage backend-add`` will also create all existing private networks on
the new backend. You can verify that the backend has been added by running
`snf-manage backend-list`.

Note that no VMs will be spawned on this backend, since by default it is in a
``drained`` state after addition and also has no public network assigned to
it.

So, first you need to create its public network, make sure everything works as
expected and finally make it active by unsetting the ``drained`` flag. You can
do this by running:

.. code-block:: console

   $ snf-manage backend-modify --drained=False <backend_id>

Removing an existing Ganeti backend
```````````````````````````````````
In order to remove an existing backend from Synnefo, we run:

.. code-block:: console

   # snf-manage backend-remove <backend_id>

This command will fail if there are active VMs on the backend. Also, the
backend is not cleaned before removal, so all the Synnefo private networks
will be left on the Ganeti nodes. You need to remove them manually.

Allocation of VMs in Ganeti backends
````````````````````````````````````
As already mentioned, the Cyclades backend allocator is responsible for
allocating new VMs to backends. This allocator does not choose the exact Ganeti
node that will host the VM but just the Ganeti backend. The exact node is
chosen by the Ganeti cluster's allocator (hail).

The decision about which backend will host a VM is based on the available
resources. The allocator computes a score for each backend, which reflects its
load factor, and the one with the minimum score is chosen. The admin can
exclude backends from the allocation phase by marking them as ``drained``, by
running:

.. code-block:: console

   $ snf-manage backend-modify --drained=True <backend_id>

The backend resources are periodically updated, at a period defined by
the ``BACKEND_REFRESH_MIN`` setting, or by running the `snf-manage backend-update-status`
command. It is advised to have a cron job running this command at a smaller
interval than ``BACKEND_REFRESH_MIN``, in order to remove the load of refreshing
the backend stats from the VM creation phase.
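As a sketch, such a cron job could be defined in a file under ``/etc/cron.d/``
like the following (the 15-minute interval is a placeholder; pick one shorter
than your ``BACKEND_REFRESH_MIN``)::

   */15 * * * * root snf-manage backend-update-status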
Finally, the admin can decide to have a user's VMs allocated to a
specific backend, with the ``BACKEND_PER_USER`` setting. This is a mapping
between users and backends. If the user is found in ``BACKEND_PER_USER``, then
Synnefo allocates all his/her VMs to the specific backend in the variable,
even if it is marked as drained (useful for testing).
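For illustration, a hypothetical mapping pinning a single user to one backend
might look like the following (the email address and backend identifier are
placeholders; check the comments shipped with the setting for the exact format
expected by your Synnefo version)::

   BACKEND_PER_USER = {
       'user@example.com': 2,
   }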
Managing Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, Cyclades uses Ganeti for management of VMs. The administrator can
handle Cyclades VMs just like any other Ganeti instance, via `gnt-instance`
commands. All Ganeti instances that belong to Synnefo are separated from
others by a prefix in their names. This prefix is defined by the
``BACKEND_PREFIX_ID`` setting in
``/etc/synnefo/20-snf-cyclades-app-backend.conf``.
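For example, assuming a hypothetical prefix of ``snf-``, the Synnefo-owned
instances could be picked out on the Ganeti master with:

.. code-block:: console

   # gnt-instance list | grep '^snf-'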
Apart from handling instances directly at the Ganeti level, a number of
`snf-manage` commands are available:

* ``snf-manage server-list``: List servers
* ``snf-manage server-show``: Show information about a server in the Cyclades DB
* ``snf-manage server-inspect``: Inspect the state of a server both in DB and Ganeti
* ``snf-manage server-modify``: Modify the state of a server in the Cyclades DB
* ``snf-manage server-create``: Create a new server
* ``snf-manage server-import``: Import an existing Ganeti instance to Cyclades


Managing Virtual Networks
~~~~~~~~~~~~~~~~~~~~~~~~~

Cyclades is able to create and manage virtual networks. Networking is
deployment specific and must be customized based on the specific needs of the
system administrator. For a better understanding of networking please refer to
the :ref:`Network <networks>` section.

Exactly as Cyclades VMs can be handled like Ganeti instances, Cyclades networks
can also be handled as Ganeti networks, via `gnt-network` commands. All Ganeti
networks that belong to Synnefo are named with the prefix
`${BACKEND_PREFIX_ID}-net-`.

There are also the following `snf-manage` commands for managing networks:

* ``snf-manage network-list``: List networks
* ``snf-manage network-show``: Show information about a network in the Cyclades DB
* ``snf-manage network-inspect``: Inspect the state of the network in DB and Ganeti backends
* ``snf-manage network-modify``: Modify the state of a network in the Cyclades DB
* ``snf-manage network-create``: Create a new network
* ``snf-manage network-remove``: Remove an existing network

Managing Network Resources
``````````````````````````

Proper operation of the Cyclades Network Service depends on the unique
assignment of specific resources to each type of virtual network. Specifically,
these resources are:

* IP addresses. Cyclades creates a pool of IPs for each network, and assigns a
  unique IP address to each VM, thus connecting it to this network. You can see
  the IP pool of each network by running `snf-manage network-inspect
  <network_ID>`. IP pools are automatically created and managed by Cyclades,
  depending on the subnet of the network.
* Bridges corresponding to physical VLANs, which are required for networks of
  type `PRIVATE_PHYSICAL_VLAN`.
* One bridge corresponding to one physical VLAN, which is required for networks
  of type `PRIVATE_MAC_PREFIX`.

Cyclades allocates those resources from pools that are created by the
administrator with the `snf-manage pool-create` management command.

Pool Creation
`````````````
Pools are created using the `snf-manage pool-create` command:

.. code-block:: console

   # snf-manage pool-create --type=bridge --base=prv --size=20

will create a pool of bridges, containing the bridges prv1, prv2, ..., prv20.
You can verify the creation of the pool, and check its contents, by running:

.. code-block:: console

   # snf-manage pool-list
   # snf-manage pool-show --type=bridge 1

With the same commands you can handle a pool of MAC prefixes. For example:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

will create a pool of MAC prefixes from ``aa:00:1`` to ``b9:ff:f``. The MAC
prefix pool is responsible for providing only unicast and locally administered
MAC addresses, so many of these prefixes will be externally reserved, to
exclude them from allocation.

Cyclades advanced operations
----------------------------

Reconciliation mechanism
~~~~~~~~~~~~~~~~~~~~~~~~

On certain occasions, such as a Ganeti or RabbitMQ failure, the state of the
Cyclades database may differ from the real state of VMs and networks in the
Ganeti backends. The reconciliation process is designed to synchronize
the state of the Cyclades DB with Ganeti. There are two management commands
for reconciling VMs and networks.

Reconciling Virtual Machines
````````````````````````````

Reconciliation of VMs detects the following conditions:

 * Stale DB servers without corresponding Ganeti instances
 * Orphan Ganeti instances, without corresponding DB entries
 * Out-of-sync state for DB entries with respect to Ganeti instances

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-servers

Adding the `--fix-all` option will perform the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-servers --fix-all

Please see ``snf-manage reconcile-servers --help`` for all the details.
Reconciling Networks
````````````````````

Reconciliation of networks detects the following conditions:

  * Stale DB networks without corresponding Ganeti networks
  * Orphan Ganeti networks, without corresponding DB entries
  * Private networks that are not created in all Ganeti backends
  * Unsynchronized IP pools

To detect all inconsistencies you can just run:

.. code-block:: console

  $ snf-manage reconcile-networks

Adding the `--fix-all` option will perform the actual synchronization:

.. code-block:: console

  $ snf-manage reconcile-networks --fix-all

Please see ``snf-manage reconcile-networks --help`` for all the details.


Block Storage Service (Archipelago)
===================================

Overview
--------
Archipelago offers copy-on-write snapshottable volumes. Pithos images can be
used to provision a volume with copy-on-write semantics (i.e. a clone).
Snapshots offer a unique deduplicated image of a volume, which reflects the
volume state at snapshot creation time and is indistinguishable from a Pithos
image.

Archipelago is used by Cyclades and Ganeti for fast provisioning of VMs based on
CoW volumes. Moreover, it enables live migration of thinly-provisioned VMs with
no physically shared storage.

Archipelago Architecture
------------------------

.. image:: images/archipelago-architecture.png
   :width: 50%
   :target: _images/archipelago-architecture.png

.. _syn+archip+rados:

Overview of Synnefo + Archipelago + RADOS
-----------------------------------------

.. image:: images/synnefo-arch3.png
   :width: 100%
   :target: _images/synnefo-arch3.png

Prereqs
-------

The administrator must initialize the storage backend where Archipelago volume
blocks will reside.

In the case of a files backend, the administrator must create two directories,
one for the Archipelago data blocks and one for the Archipelago map blocks.
These should probably be on shared storage, to enable sharing Archipelago
volumes between multiple nodes. He or she must also be able to supply a
directory where the Pithos data and map blocks reside.
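As an illustrative sketch, the two directories could be created as follows
(the paths are placeholders; choose locations on your shared storage):

.. code-block:: console

   # mkdir -p /srv/archip/blocks
   # mkdir -p /srv/archip/maps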
In the case of a RADOS backend, the administrator must create two RADOS pools,
one for the data blocks and one for the map blocks. These pools must be the
same pools used by Pithos, in order to enable volume creation based on Pithos
images.
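For example, assuming hypothetical pool names ``blocks`` and ``maps``, the
pools could be created with the standard ``rados`` tool:

.. code-block:: console

   # rados mkpool blocks
   # rados mkpool maps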
Installation
------------

Archipelago consists of:

* ``libxseg0``: libxseg, used to communicate over shared memory segments
* ``python-xseg``: Python bindings for libxseg
* ``archipelago-kernel-dkms``: contains the Archipelago kernel modules that
  provide block devices to be used as VM disks
* ``python-archipelago``: the Archipelago Python module. Includes the
  archipelago and vlmc functionality.
* ``archipelago``: user space tools and peers for Archipelago management and
  volume composition
* ``archipelago-ganeti``: Ganeti ext storage scripts, which enable Ganeti to
  provision VMs over Archipelago

Running

.. code-block:: console

  $ apt-get install archipelago-ganeti

should fetch all the required packages and get you up and running with
Archipelago.

Bear in mind that a custom librados is required, which is provided in the apt
repo of GRNet.

For now, librados is a dependency of Archipelago, even if you do not intend to
use Archipelago over RADOS.

Configuration
-------------
Archipelago should work out of the box with a RADOS backend, but basic
configuration can be done in ``/etc/default/archipelago``.

If you wish to change the storage backend to files, set

.. code-block:: console

   STORAGE="files"

and provide the appropriate settings for the files storage backend in the conf
file.

These are:

* ``FILED_IMAGES``: directory for Archipelago data blocks.
* ``FILED_MAPS``: directory for Archipelago map blocks.
* ``PITHOS``: directory of Pithos data blocks.
* ``PITHOSMAPS``: directory of Pithos map blocks.

The settings for the RADOS storage backend are:

* ``RADOS_POOL_MAPS``: The pool where Archipelago and Pithos map blocks reside.
* ``RADOS_POOL_BLOCKS``: The pool where Archipelago and Pithos data blocks
  reside.

Examples can be found in the conf file.
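For illustration, a hypothetical files-backend fragment of
``/etc/default/archipelago`` might look like this (all paths are placeholders)::

   STORAGE="files"
   FILED_IMAGES="/srv/archip/blocks"
   FILED_MAPS="/srv/archip/maps"
   PITHOS="/srv/pithos/data"
   PITHOSMAPS="/srv/pithos/maps"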
Be aware that the Archipelago infrastructure doesn't provide default values for
these settings. If they are not set in the conf file, Archipelago will not be
able to function.

Archipelago also provides ``VERBOSITY`` config options to control the output
generated by the userspace peers.

The available options are:

* ``VERBOSITY_BLOCKERB``
* ``VERBOSITY_BLOCKERM``
* ``VERBOSITY_MAPPER``
* ``VERBOSITY_VLMC``

and the available values are:

* 0: Error only logging.
* 1: Warning logging.
* 2: Info logging.
* 3: Debug logging. WARNING: This option produces tons of output, but the
  logrotate daemon should take care of it.

Working with Archipelago
------------------------

The ``archipelago`` tool provides basic functionality for Archipelago.

Usage:

.. code-block:: console

  $ archipelago [-u] command

Currently it supports the following commands:

* ``start [peer]``
  Starts Archipelago or the specified peer.
* ``stop [peer]``
  Stops Archipelago or the specified peer.
* ``restart [peer]``
  Restarts Archipelago or the specified peer.
* ``status``
  Shows the status of Archipelago.

Available peers: ``blockerm``, ``blockerb``, ``mapperd``, ``vlmcd``.

``start``, ``stop``, ``restart`` can be combined with the ``-u / --user``
option to affect only the userspace peers supporting Archipelago.


Archipelago advanced operations
-------------------------------
The ``vlmc`` tool provides a way to interact with Archipelago volumes.

* ``vlmc map <volumename>``: maps the volume to a xsegbd device.

* ``vlmc unmap </dev/xsegbd[1-..]>``: unmaps the specified device from the
  system.

* ``vlmc create <volumename> --snap <snapname> --size <size>``: creates a new
  volume named <volumename> from the snapshot named <snapname> with size
  <size>. The ``--snap`` and ``--size`` options are optional, but at least one
  of them is mandatory. E.g.:

  ``vlmc create <volumename> --snap <snapname>`` creates a volume named
  volumename from snapshot snapname. The size of the volume is the same as
  the size of the snapshot.

  ``vlmc create <volumename> --size <size>`` creates an empty volume of size
  <size> named <volumename>.

* ``vlmc remove <volumename>``: removes the volume and all the related
  Archipelago blocks from storage.

* ``vlmc list``: provides a list of Archipelago volumes. Currently it only
  works with the RADOS storage backend.

* ``vlmc info <volumename>``: shows volume information. Currently returns only
  the volume size.

* ``vlmc open <volumename>``: opens an Archipelago volume. That is, it takes
  all the necessary locks and also makes the rest of the infrastructure aware
  of the operation.

  This operation succeeds even if the volume is already opened.

* ``vlmc close <volumename>``: closes an Archipelago volume. That is, it
  performs all the necessary functions in the infrastructure to successfully
  release the volume. It also releases all the acquired locks.

  ``vlmc close`` should be performed after a ``vlmc open`` operation.

* ``vlmc lock <volumename>``: locks a volume. This step allows the
  administrator to lock an Archipelago volume, independently from the rest of
  the infrastructure.

* ``vlmc unlock [-f] <volumename>``: unlocks a volume. This allows the
  administrator to unlock a volume, independently from the rest of the
  infrastructure.
  The unlock operation can be performed only by the blocker that acquired the
  lock in the first place. To unlock a volume from another blocker, the ``-f``
  option must be used to break the lock.
817

    
818

    
819
The "kamaki" API client
=======================

To upload, register or modify an image you will need the **kamaki** tool.
Before proceeding make sure that it is configured properly. Verify that
*image_url*, *storage_url*, and *token* are set as needed:

.. code-block:: console

   $ kamaki config list

To change a setting use ``kamaki config set``:

.. code-block:: console

   $ kamaki config set image_url https://cyclades.example.com/plankton
   $ kamaki config set storage_url https://pithos.example.com/v1
   $ kamaki config set token ...

Upload Image
------------

As a shortcut, you can configure a default account and container that will be
used by the ``kamaki store`` commands:

.. code-block:: console

   $ kamaki config set storage_account images@example.com
   $ kamaki config set storage_container images

If the container does not exist, you will have to create it before uploading
any images:

.. code-block:: console

   $ kamaki store create images

You are now ready to upload an image. You can upload it with a Pithos+ client,
or use kamaki directly:

.. code-block:: console

   $ kamaki store upload ubuntu.iso

You can use any Pithos+ client to verify that the image was uploaded correctly.
The full Pithos URL for the previous example will be
``pithos://images@example.com/images/ubuntu.iso``.

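The full URL follows the pattern ``pithos://<account>/<container>/<object>``.
A tiny hypothetical helper (not part of kamaki) that assembles it:

```python
# Hypothetical helper (not part of kamaki): build the full Pithos+ URL
# for an uploaded object from its account, container and object name.
def pithos_url(account, container, obj):
    return "pithos://{0}/{1}/{2}".format(account, container, obj)

# The upload from the section above:
url = pithos_url("images@example.com", "images", "ubuntu.iso")
```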
Register Image
--------------

To register an image you will need to use the full Pithos+ URL. To register
the image from the previous example as a public one, use:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso --public

The ``--public`` flag is important: if it is missing, the registered image
will not be listed by ``kamaki glance list``.

Use ``kamaki glance register`` with no arguments to see a list of available
options. A more complete example would be the following:

.. code-block:: console

   $ kamaki glance register Ubuntu pithos://images@example.com/images/ubuntu.iso \
            --public --disk-format diskdump --property kernel=3.1.2

To verify that the image was registered successfully use:

.. code-block:: console

   $ kamaki glance list -l


Miscellaneous
=============

.. RabbitMQ

RabbitMQ Broker
---------------

Queue nodes run the RabbitMQ software, which provides AMQP functionality. To
guarantee high availability, more than one Queue node should be deployed, each
of them belonging to the same `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Synnefo uses the RabbitMQ
active/active `Highly Available Queues <http://www.rabbitmq.com/ha.html>`_,
which are mirrored between two nodes within a RabbitMQ cluster.

The RabbitMQ nodes that form the cluster are declared to Synnefo through the
`AMQP_HOSTS` setting. Each time a Synnefo component needs to connect to
RabbitMQ, one of these nodes is chosen at random. The client that Synnefo
uses to connect to RabbitMQ handles connection failures transparently and
tries to reconnect to a different node. As long as one of these nodes is up
and running, the functionality of Synnefo should not be degraded by RabbitMQ
node failures.

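The behaviour described above (a random node choice with transparent fallback
to the remaining cluster members) can be sketched as follows. This is an
illustrative sketch, not the actual Synnefo AMQP client, and the broker URLs
are made-up examples:

```python
# Illustrative sketch (not the actual Synnefo client): pick a broker at
# random from AMQP_HOSTS and fall back to the remaining cluster nodes on
# connection failure.
import random

AMQP_HOSTS = ["amqp://synnefo:example_pass@node1:5672",  # example hostnames
              "amqp://synnefo:example_pass@node2:5672"]

def connect_with_fallback(hosts, connect):
    """Try brokers in random order; return the first live connection."""
    candidates = list(hosts)
    random.shuffle(candidates)  # random initial node choice
    for host in candidates:
        try:
            return connect(host)
        except ConnectionError:
            continue  # this node is down; try the next cluster member
    raise ConnectionError("no AMQP broker reachable")
```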
All the queues that are being used are declared as durable, meaning that
messages are persistently stored in RabbitMQ, until they get successfully
processed by a client.

Currently, RabbitMQ is used by the following components:

* `snf-ganeti-eventd`, `snf-ganeti-hook` and `snf-progress-monitor`:
  These components send messages concerning the status and progress of
  jobs in the Ganeti backend.
* `snf-dispatcher`: This daemon consumes the messages that are sent from
  the above components, and updates the Cyclades DB accordingly.


Installation
~~~~~~~~~~~~

Please check the RabbitMQ documentation, which covers extensively the
`installation of RabbitMQ server <http://www.rabbitmq.com/download.html>`_ and
the setup of a `RabbitMQ cluster <http://www.rabbitmq.com/clustering.html>`_.
Also, check out the `web management plugin
<http://www.rabbitmq.com/management.html>`_ that can be useful for managing and
monitoring RabbitMQ.

For a basic installation of RabbitMQ on two nodes (node1 and node2) you can do
the following:

On both nodes, install rabbitmq-server and create a Synnefo user:

.. code-block:: console

  $ apt-get install rabbitmq-server
  $ rabbitmqctl add_user synnefo "example_pass"
  $ rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

Also guarantee that both nodes share the same Erlang cookie, by running:

.. code-block:: console

  $ scp node1:/var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/.erlang.cookie

and restart the nodes:

.. code-block:: console

  $ /etc/init.d/rabbitmq-server restart

To set up the RabbitMQ cluster run:

.. code-block:: console

  root@node2: rabbitmqctl stop_app
  root@node2: rabbitmqctl reset
  root@node2: rabbitmqctl cluster rabbit@node1 rabbit@node2
  root@node2: rabbitmqctl start_app

You can verify that the cluster is set up correctly by running:

.. code-block:: console

  root@node2: rabbitmqctl cluster_status


Admin tool: snf-manage
----------------------

``snf-manage`` is a tool used to perform various administrative tasks. It needs
to be able to access the Django database, so it must be run on a node that can
import the Django settings.

Additionally, administrative tasks can be performed via the admin web interface
located in /admin. Only users of type ADMIN can access the admin pages. To
change the type of a user to ADMIN, snf-manage can be used:

.. code-block:: console

   $ snf-manage user-modify 42 --type ADMIN

Logging
-------

Logging in Synnefo uses Python's logging module. The module is configured
using dictionary configuration, whose format is described here:

http://docs.python.org/release/2.7.1/library/logging.html#logging-config-dictschema

Note that this is a feature of Python 2.7 that we have backported for use in
Python 2.6.

The logging configuration dictionary is defined in
``/etc/synnefo/10-snf-webproject-logging.conf``.

The administrator can have finer logging control by modifying the
``LOGGING_SETUP`` dictionary, and defining subloggers with different handlers
and log levels. For example, to enable debug messages only for the API, set
the level of 'synnefo.api' to ``DEBUG``.

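As an illustration of the dictionary schema, the following minimal
configuration enables ``DEBUG`` only for the 'synnefo.api' sublogger while the
rest of the 'synnefo' tree stays at ``INFO``. The handler choice here is
illustrative, not the shipped Synnefo defaults:

```python
# Minimal sketch of dictionary-based logging configuration: a sublogger
# ('synnefo.api') gets a finer level than its parent. Handler names are
# illustrative, not the Synnefo-shipped configuration.
import logging
import logging.config

LOGGING_SETUP = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'synnefo': {'handlers': ['console'], 'level': 'INFO'},
        'synnefo.api': {'level': 'DEBUG'},  # debug messages for the API only
    },
}

logging.config.dictConfig(LOGGING_SETUP)
```

Any other sublogger, e.g. 'synnefo.logic', inherits the parent's ``INFO``
level, so only API debug messages are emitted.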
By default, the Django webapp and snf-manage log to syslog, while
`snf-dispatcher` logs to `/var/log/synnefo/dispatcher.log`.


.. _scale-up:

Scaling up to multiple nodes
============================

Here we will describe what a large scale Synnefo deployment should look like.
Make sure you are familiar with Synnefo and Ganeti before proceeding with this
section. This means you should at least have already set up successfully a
working Synnefo deployment as described in the :ref:`Admin's Quick
Installation Guide <quick-install-admin-guide>` and also read the
Administrator's Guide up to this section.

Graph of a scale-out Synnefo deployment
---------------------------------------

Each box in the following graph corresponds to a distinct physical node:

.. image:: images/synnefo-arch2-roles.png
   :width: 100%
   :target: _images/synnefo-arch2-roles.png

The above graph is actually the same as the one at the beginning of this
:ref:`guide <admin-guide>`, with the only difference that here we show the
Synnefo roles of each physical node. These roles are described in the
following section.

.. _physical-node-roles:

Physical Node roles
-------------------

As appears in the previous graph, a scale-out Synnefo deployment consists of
multiple physical nodes that have the following roles:

* **WEBSERVER**: A web server running in front of gunicorn (e.g.: Apache, nginx)
* **ASTAKOS**: The Astakos application (gunicorn)
* **ASTAKOS_DB**: The Astakos database (postgresql)
* **PITHOS**: The Pithos application (gunicorn)
* **PITHOS_DB**: The Pithos database (postgresql)
* **CYCLADES**: The Cyclades application (gunicorn)
* **CYCLADES_DB**: The Cyclades database (postgresql)
* **MQ**: The message queue (RabbitMQ)
* **GANETI_MASTER**: The Ganeti master of a Ganeti cluster
* **GANETI_NODE**: A VM-capable Ganeti node of a Ganeti cluster

You will probably also have:

* **CMS**: The CMS used as a frontend portal for the Synnefo services
* **NS**: A nameserver serving all other Synnefo nodes and resolving Synnefo FQDNs
* **CLIENT**: A machine that runs the Synnefo clients (e.g.: kamaki, Web UI),
  most of the time the end user's local machine

From this point on, we will also refer to the following groups of roles:

* **SYNNEFO**: [**ASTAKOS**, **ASTAKOS_DB**, **PITHOS**, **PITHOS_DB**, **CYCLADES**, **CYCLADES_DB**, **MQ**, **CMS**]
* **G_BACKEND**: [**GANETI_MASTER**, **GANETI_NODE**]

Of course, when deploying Synnefo you can combine multiple of the above roles
on a single physical node, but if you are trying to scale out, the above
separation gives you significant advantages.

So, in the next section we will take a look at which components you will have
to install on each physical node depending on its Synnefo role. We assume the
graph's architecture.

Components for each role
------------------------

When deploying Synnefo at large scale, you need to install different Synnefo
and/or third party components on different physical nodes according to their
Synnefo role, as stated in the previous section.

Specifically:

Role **WEBSERVER**
    * Synnefo components: `None`
    * 3rd party components: Apache
Role **ASTAKOS**
    * Synnefo components: `snf-webproject`, `snf-astakos-app`
    * 3rd party components: Django, Gunicorn
Role **ASTAKOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **PITHOS**
    * Synnefo components: `snf-webproject`, `snf-pithos-app`, `snf-pithos-webclient`
    * 3rd party components: Django, Gunicorn
Role **PITHOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **CYCLADES**
    * Synnefo components: `snf-webproject`, `snf-cyclades-app`, `snf-vncauthproxy`
    * 3rd party components: Django, Gunicorn
Role **CYCLADES_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **MQ**
    * Synnefo components: `None`
    * 3rd party components: RabbitMQ
Role **GANETI_MASTER**
    * Synnefo components: `snf-cyclades-gtools`
    * 3rd party components: Ganeti
Role **GANETI_NODE**
    * Synnefo components: `snf-cyclades-gtools`, `snf-network`, `snf-image`, `nfdhcpd`
    * 3rd party components: Ganeti
Role **CMS**
    * Synnefo components: `snf-webproject`, `snf-cloudcms`
    * 3rd party components: Django, Gunicorn
Role **NS**
    * Synnefo components: `None`
    * 3rd party components: BIND
Role **CLIENT**
    * Synnefo components: `kamaki`, `snf-image-creator`
    * 3rd party components: `None`

Example scale out installation
------------------------------

In this section we describe an example of a medium scale installation which
combines multiple roles on 10 different physical nodes. We also provide a
:ref:`guide <i-synnefo>` to help with such an install.

We assume that we have the following 10 physical nodes with the corresponding
roles:

Node1:
    **WEBSERVER**, **ASTAKOS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-astakos-app <i-astakos>`
Node2:
    **WEBSERVER**, **PITHOS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-pithos-app <i-pithos>`
        * :ref:`snf-pithos-webclient <i-pithos>`
Node3:
    **WEBSERVER**, **CYCLADES**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cyclades-app <i-cyclades>`
        * :ref:`snf-vncauthproxy <i-cyclades>`
Node4:
    **WEBSERVER**, **CMS**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cloudcms <i-cms>`
Node5:
    **ASTAKOS_DB**, **PITHOS_DB**, **CYCLADES_DB**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`postgresql <i-db>`
Node6:
    **MQ**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`rabbitmq <i-mq>`
Node7:
    **GANETI_MASTER**, **GANETI_NODE**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node8:
    **GANETI_NODE**
      Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node9:
    **GANETI_NODE**
      Guide sections:
        `Same as Node8`
Node10:
    **GANETI_NODE**
      Guide sections:
        `Same as Node8`

All sections: :ref:`Scale out Guide <i-synnefo>`

Upgrade Notes
=============

.. toctree::
   :maxdepth: 1

   v0.12 -> v0.13 <upgrade/upgrade-0.13>


Changelog, News
===============

* v0.13 :ref:`Changelog <Changelog-0.13>`, :ref:`NEWS <NEWS-0.13>`


Older Cyclades Upgrade Notes
============================

.. toctree::
   :maxdepth: 2

   Upgrade <upgrade/cyclades-upgrade>