.. _admin-guide:

Synnefo Administrator's Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the complete Synnefo Administrator's Guide.


General Synnefo Architecture
============================

The following graph shows the whole Synnefo architecture and how it interacts
with multiple Ganeti clusters. We hope that after reading the Administrator's
Guide you will be able to understand every component and all the interactions
between them. It is a good idea to first go through the Quick Administrator's
Guide before proceeding.

.. image:: images/synnefo-arch2.png
   :width: 100%
   :target: _images/synnefo-arch2.png

Identity Service (Astakos)
==========================


Overview
--------

Authentication methods
~~~~~~~~~~~~~~~~~~~~~~

Local Authentication
````````````````````

LDAP Authentication
```````````````````

.. _shibboleth-auth:

Shibboleth Authentication
`````````````````````````

Astakos can delegate user authentication to a Shibboleth federation.

To set up Shibboleth, install the package::

   apt-get install libapache2-mod-shib2

Change the configuration files in ``/etc/shibboleth`` appropriately.

Add in ``/etc/apache2/sites-available/synnefo-ssl``::

   ShibConfig /etc/shibboleth/shibboleth2.xml
   Alias /shibboleth-sp /usr/share/shibboleth

   <Location /im/login/shibboleth>
     AuthType shibboleth
     ShibRequireSession On
     ShibUseHeaders On
     require valid-user
   </Location>

and before the line containing::

   ProxyPass / http://localhost:8080/ retry=0

add::

   ProxyPass /Shibboleth.sso !

Then, enable the Shibboleth module::

   a2enmod shib2

After passing through the Apache module, the following tokens should be
available at the destination::

   eppn # eduPersonPrincipalName
   Shib-InetOrgPerson-givenName
   Shib-Person-surname
   Shib-Person-commonName
   Shib-InetOrgPerson-displayName
   Shib-EP-Affiliation
   Shib-Session-ID

Finally, add ``'shibboleth'`` to the ``ASTAKOS_IM_MODULES`` list. The variable
resides in the file ``/etc/synnefo/20-snf-astakos-app-settings.conf``.

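With ``ShibUseHeaders On``, the tokens listed above reach the proxied application as plain HTTP request headers. As a rough, hypothetical sketch (the WSGI key names are derived from the token names by the usual ``HTTP_`` convention and are assumptions, not part of Astakos), an application behind the proxy could read them like this:

```python
# Sketch: reading Shibboleth-injected headers from a WSGI environ.
# The environ below is fabricated; in a real deployment these values
# are set by mod_shib2 / Apache, never by the application itself.

def shibboleth_identity(environ):
    """Map the proxied Shibboleth headers to a simple identity dict."""
    # WSGI exposes an HTTP header "X-Y" as environ["HTTP_X_Y"].
    mapping = {
        "eppn": "HTTP_EPPN",  # eduPersonPrincipalName
        "name": "HTTP_SHIB_INETORGPERSON_DISPLAYNAME",
        "surname": "HTTP_SHIB_PERSON_SURNAME",
        "session": "HTTP_SHIB_SESSION_ID",
    }
    return {field: environ.get(key) for field, key in mapping.items()}

# Fabricated request environ, as Apache might forward it:
environ = {
    "HTTP_EPPN": "user@example.edu",
    "HTTP_SHIB_INETORGPERSON_DISPLAYNAME": "Jane Doe",
    "HTTP_SHIB_PERSON_SURNAME": "Doe",
    "HTTP_SHIB_SESSION_ID": "_abc123",
}
print(shibboleth_identity(environ)["eppn"])  # user@example.edu
```

The point of the sketch is only that identity arrives as trusted headers injected upstream, which is why the ``ProxyPass /Shibboleth.sso !`` exclusion above matters: the SP endpoints must be handled by Apache, not forwarded to the application.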
Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Astakos
--------------------

User activation methods
~~~~~~~~~~~~~~~~~~~~~~~

When a new user signs up, he/she is not marked as active. You can see his/her
state by running (on the machine that runs the Astakos app):

.. code-block:: console

   $ snf-manage user-list

There are two different ways to activate a new user. Both need access to a
running :ref:`mail server <mail-server>`.

Manual activation
`````````````````

You can manually activate a new user that has already signed up, by sending
him/her an activation email. The email will contain an appropriate activation
link, which will complete the activation process if followed. You can send the
email by running:

.. code-block:: console

   $ snf-manage user-activation-send <user ID or email>

Be sure to have already set up your mail server and defined it in your Synnefo
settings before running the command.

Automatic activation
````````````````````

FIXME: Describe Regex activation method

Setting quota limits
~~~~~~~~~~~~~~~~~~~~

Set default quotas
``````````````````

In ``20-snf-astakos-app-settings.conf``, uncomment the default setting
``ASTAKOS_SERVICES`` and customize the ``'uplimit'`` values. These are the
default base quotas for all users.

To apply your configuration run::

   # snf-manage astakos-init --load-service-resources
   # snf-manage astakos-quota --sync

Set base quotas for individual users
````````````````````````````````````

For individual users that need different quotas than the default,
you can set them for each resource like this::

   # use this to display quotas / uuid
   # snf-manage user-show 'uuid or email'

   # snf-manage user-set-initial-quota --set-capacity 'user-uuid' 'cyclades.vm' 10

   # this applies the configuration
   # snf-manage astakos-quota --sync --user 'user-uuid'


Enable the Projects feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to enable the Projects feature, so that users may apply
on their own for resources by creating and joining projects,
in ``20-snf-astakos-app-settings.conf`` set::

   # this will make the 'projects' page visible in the dashboard
   ASTAKOS_PROJECTS_VISIBLE = True

You can change the maximum allowed number of pending project applications
per user with::

   # snf-manage resource-modify astakos.pending_app --limit <number>

You can also set a user-specific limit with::

   # snf-manage user-set-initial-quota --set-capacity 'user-uuid' 'astakos.pending_app' 5

When users apply for projects they are not automatically granted the
resources. The application must first be approved by the administrator.

To list pending project applications in Astakos::

   # snf-manage project-list --pending

Note the last column, the application id. To approve it::

   # <app id> from the last column of project-list
   # snf-manage project-control --approve <app id>

To deny an application::

   # snf-manage project-control --deny <app id>

Users designated as *project admins* can approve, deny, or modify
an application through the web interface. In
``20-snf-astakos-app-settings.conf`` set::

   # UUIDs of users that can approve or deny project applications from the web.
   ASTAKOS_PROJECT_ADMINS = [<uuid>, ...]


Astakos advanced operations
---------------------------

Adding "Terms of Use"
~~~~~~~~~~~~~~~~~~~~~

Astakos supports versioned terms of use. First of all, you need to create an
HTML file that will contain your terms. For example, create the file
``/usr/share/synnefo/sample-terms.html``, which contains the following:

.. code-block:: html

   <h1>~okeanos terms</h1>

   These are the example terms for ~okeanos

Then, add those terms of use with the snf-manage command:

.. code-block:: console

   $ snf-manage term-add /usr/share/synnefo/sample-terms.html

Your terms have been successfully added and you will see the corresponding link
appearing in the Astakos web pages' footer.

Enabling reCAPTCHA
~~~~~~~~~~~~~~~~~~

Astakos supports the `reCAPTCHA <http://www.google.com/recaptcha>`_ feature.
If enabled, it protects the Astakos forms from bots. To enable the feature, go
to https://www.google.com/recaptcha/admin/create and create your own reCAPTCHA
key pair. Then edit ``/etc/synnefo/20-snf-astakos-app-settings.conf`` and set
the corresponding variables to reflect your newly created key pair. Finally,
set the ``ASTAKOS_RECAPTCHA_ENABLED`` variable to ``True``:

.. code-block:: console

   ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
   ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('

   ASTAKOS_RECAPTCHA_ENABLED = True

Restart the service on the Astakos node(s) and you are ready:

.. code-block:: console

   # /etc/init.d/gunicorn restart

Check out your new sign-up page. If you see the reCAPTCHA box, you have set up
everything correctly.



File Storage Service (Pithos)
=============================

Overview
--------

Architecture
------------

Prereqs
-------

Installation
------------

Configuration
-------------

Working with Pithos
-------------------

Pithos advanced operations
--------------------------



Compute/Network/Image Service (Cyclades)
========================================

Compute Overview
----------------

Network Overview
----------------

Image Overview
--------------

Architecture
------------

Asynchronous communication with Ganeti backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Synnefo uses Google's Ganeti backends for VM cluster management. In order for
Cyclades to be able to handle thousands of user requests, Cyclades and Ganeti
communicate asynchronously. Briefly, requests are submitted to Ganeti through
Ganeti's RAPI/HTTP interface, and asynchronous notifications about the
progress of Ganeti jobs are then created and pushed upwards to Cyclades. The
architecture and communication with a Ganeti backend are shown in the graph
below:

.. image:: images/cyclades-ganeti-communication.png
   :width: 50%
   :target: _images/cyclades-ganeti-communication.png

The Cyclades API server is responsible for handling user requests. Read-only
requests are served directly by looking up the Cyclades DB. If the request
needs an action in the Ganeti backend, Cyclades submits jobs to the Ganeti
master using the `Ganeti RAPI interface
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html>`_.

While Ganeti executes the job, `snf-ganeti-eventd`, `snf-ganeti-hook` and
`snf-progress-monitor` monitor the progress of the job and send
corresponding messages to the RabbitMQ servers. These components are part
of `snf-cyclades-gtools` and must be installed on all Ganeti nodes.
Specifically:

* *snf-ganeti-eventd* sends messages about operations affecting the operating
  state of instances and networks. It works by monitoring the Ganeti job queue.
* *snf-ganeti-hook* sends messages about the NICs of instances. It includes a
  number of `Ganeti hooks <http://docs.ganeti.org/ganeti/2.2/html/hooks.html>`_
  for customisation of operations.
* *snf-progress-monitor* sends messages about the progress of the Image
  deployment phase, which is done by the Ganeti OS Definition `snf-image`.

Finally, `snf-dispatcher` consumes messages from the RabbitMQ queues, processes
them and updates the state of the Cyclades DB accordingly. Subsequent requests
to the Cyclades API will retrieve the updated state from the DB.

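The eventd-to-dispatcher flow described above can be pictured as a toy producer/consumer loop: the monitoring daemons enqueue job-status messages, and a dispatcher drains the queue and applies each message to the database. The message shape and names below are invented for illustration only; the real messages are AMQP payloads with Synnefo's own format.

```python
from collections import deque

# Toy model of the eventd -> queue -> dispatcher flow. The message
# format is fabricated; real messages come from snf-ganeti-eventd
# and friends over RabbitMQ.
queue = deque()
cyclades_db = {"vm-42": "BUILD"}  # state as Cyclades last knew it

def eventd_emit(job_events):
    """Producer side: push notifications about Ganeti job progress."""
    for event in job_events:
        queue.append(event)

def dispatcher_drain():
    """Consumer side: apply each message to the Cyclades DB in order."""
    while queue:
        msg = queue.popleft()
        cyclades_db[msg["instance"]] = msg["state"]

eventd_emit([{"instance": "vm-42", "state": "STARTED"}])
dispatcher_drain()
print(cyclades_db["vm-42"])  # STARTED
```

The decoupling shown here is why the API server can answer read requests from the DB alone: the DB lags Ganeti only until the dispatcher catches up with the queue.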
Prereqs
-------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Installation
------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Configuration
-------------

Work in progress. Please refer to the :ref:`quick administrator guide <quick-install-admin-guide>`.

Working with Cyclades
---------------------

Managing Ganeti Backends
~~~~~~~~~~~~~~~~~~~~~~~~

Since v0.11, Synnefo is able to manage multiple Ganeti clusters (backends),
enabling it to scale linearly to tens of thousands of VMs. Backends can be
dynamically added or removed via `snf-manage` commands.

Each newly created VM is allocated to a Ganeti backend by the Cyclades backend
allocator. The VM is "pinned" to this backend, and cannot change it through its
lifetime. The backend allocator decides in which backend to spawn the VM based
on the available resources of each backend, trying to balance the load between
them.

Handling of networks, as far as backends are concerned, is based on whether the
network is public or not. Public networks are created through the `snf-manage
network-create` command, and are only created on one backend. Private networks
are created on all backends, in order to ensure that VMs residing on different
backends can be connected to the same private network.

Listing existing backends
`````````````````````````
To list all the Ganeti backends known to Synnefo, we run:

.. code-block:: console

   $ snf-manage backend-list

Adding a new Ganeti backend
```````````````````````````
Backends are dynamically added under the control of Synnefo with the
`snf-manage backend-add` command. In this section it is assumed that a Ganeti
cluster, named ``cluster.example.com``, is already up and running and
configured to be able to host Synnefo VMs.

To add this Ganeti cluster, we run:

.. code-block:: console

   $ snf-manage backend-add --clustername=cluster.example.com --user="synnefo_user" --pass="synnefo_pass"

where ``clustername`` is the cluster hostname of the Ganeti cluster, and
``user`` and ``pass`` are the credentials for the `Ganeti RAPI user
<http://docs.ganeti.org/ganeti/2.2/html/rapi.html#users-and-passwords>`_. All
backend attributes can also be changed dynamically using the `snf-manage
backend-modify` command.

``snf-manage backend-add`` will also create all existing private networks on
the new backend. You can verify that the backend has been added by running
`snf-manage backend-list`.

Note that no VMs will be spawned on this backend, since by default it is in a
``drained`` state after addition and also has no public network assigned to
it.

So, first you need to create its public network, make sure everything works as
expected, and finally make it active by unsetting the ``drained`` flag. You
can do this by running:

.. code-block:: console

   $ snf-manage backend-modify --drained=False <backend_id>

Removing an existing Ganeti backend
```````````````````````````````````
In order to remove an existing backend from Synnefo, we run:

.. code-block:: console

   # snf-manage backend-remove <backend_id>

This command will fail if there are active VMs on the backend. Also, the
backend is not cleaned before removal, so all the Synnefo private networks
will be left on the Ganeti nodes. You need to remove them manually.

Allocation of VMs in Ganeti backends
````````````````````````````````````
As already mentioned, the Cyclades backend allocator is responsible for
allocating new VMs to backends. This allocator does not choose the exact
Ganeti node that will host the VM, but just the Ganeti backend. The exact node
is chosen by the Ganeti cluster's allocator (hail).

The decision about which backend will host a VM is based on the available
resources. The allocator computes a score for each backend that reflects its
load factor, and the one with the minimum score is chosen. The admin can
exclude backends from the allocation phase by marking them as ``drained``:

.. code-block:: console

   $ snf-manage backend-modify --drained=True <backend_id>

The backend resources are periodically updated, at a period defined by the
``BACKEND_REFRESH_MIN`` setting, or by running the `snf-manage
backend-update-status` command. It is advised to have a cron job running this
command at a smaller interval than ``BACKEND_REFRESH_MIN``, in order to remove
the load of refreshing the backends' stats from the VM creation phase.

Finally, the admin can decide to have a user's VMs allocated to a specific
backend, with the ``BACKEND_PER_USER`` setting. This is a mapping between
users and backends. If the user is found in ``BACKEND_PER_USER``, then Synnefo
allocates all his/her VMs to the specific backend in the variable, even if it
is marked as drained (useful for testing).


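The "minimum score, skip drained" selection can be sketched in a few lines. The score formula here is invented for illustration (the real allocator weighs several resources); only the overall shape matches the description above.

```python
def allocate(backends):
    """Pick the non-drained backend with the lowest load score."""
    candidates = [b for b in backends if not b["drained"]]
    # Invented score: fraction of memory already committed. The real
    # allocator combines several resource metrics into its score.
    def score(b):
        return b["mem_used"] / b["mem_total"]
    return min(candidates, key=score)["name"]

# Fabricated backend stats:
backends = [
    {"name": "ganeti1", "mem_used": 60, "mem_total": 100, "drained": False},
    {"name": "ganeti2", "mem_used": 10, "mem_total": 100, "drained": False},
    {"name": "ganeti3", "mem_used": 0,  "mem_total": 100, "drained": True},
]
print(allocate(backends))  # ganeti2 (ganeti3 is drained, ganeti1 is more loaded)
```

Note how a drained backend is skipped even though it has the lowest raw score, which is exactly why ``--drained=True`` removes a backend from the allocation phase without affecting its running VMs.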
Managing Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned, Cyclades uses Ganeti for management of VMs. The administrator
can handle Cyclades VMs just like any other Ganeti instance, via
`gnt-instance` commands. All Ganeti instances that belong to Synnefo are
separated from others by a prefix in their names. This prefix is defined by
the ``BACKEND_PREFIX_ID`` setting in
``/etc/synnefo/20-snf-cyclades-app-backend.conf``.

Apart from handling instances directly at the Ganeti level, a number of
`snf-manage` commands are available:

* ``snf-manage server-list``: List servers
* ``snf-manage server-show``: Show information about a server in the Cyclades DB
* ``snf-manage server-inspect``: Inspect the state of a server both in DB and Ganeti
* ``snf-manage server-modify``: Modify the state of a server in the Cyclades DB
* ``snf-manage server-create``: Create a new server
* ``snf-manage server-import``: Import an existing Ganeti instance to Cyclades


Managing Virtual Networks
~~~~~~~~~~~~~~~~~~~~~~~~~

Cyclades is able to create and manage Virtual Networks. Networking is
deployment specific and must be customized based on the specific needs of the
system administrator. For a better understanding of networking please refer
to the :ref:`Network <networks>` section.

Exactly as Cyclades VMs can be handled like Ganeti instances, Cyclades
networks can also be handled as Ganeti networks, via `gnt-network` commands.
All Ganeti networks that belong to Synnefo are named with the prefix
`${BACKEND_PREFIX_ID}-net-`.

There are also the following `snf-manage` commands for managing networks:

* ``snf-manage network-list``: List networks
* ``snf-manage network-show``: Show information about a network in the Cyclades DB
* ``snf-manage network-inspect``: Inspect the state of the network in DB and Ganeti backends
* ``snf-manage network-modify``: Modify the state of a network in the Cyclades DB
* ``snf-manage network-create``: Create a new network
* ``snf-manage network-remove``: Remove an existing network

Managing Network Resources
``````````````````````````

Proper operation of the Cyclades Network Service depends on the unique
assignment of specific resources to each type of virtual network.
Specifically, these resources are:

* IP addresses. Cyclades creates a pool of IPs for each network, and assigns a
  unique IP address to each VM, thus connecting it to this network. You can
  see the IP pool of each network by running `snf-manage network-inspect
  <network_ID>`. IP pools are automatically created and managed by Cyclades,
  depending on the subnet of the network.
* Bridges corresponding to physical VLANs, which are required for networks of
  type `PRIVATE_PHYSICAL_VLAN`.
* One bridge corresponding to one physical VLAN, which is required for
  networks of type `PRIVATE_MAC_PREFIX`.

Cyclades allocates those resources from pools that are created by the
administrator with the `snf-manage pool-create` management command.

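The per-network IP pool behaviour can be sketched with the standard ``ipaddress`` module. This is an illustration of the idea only, not Cyclades' implementation: the pool hands out free host addresses of the network's subnet and takes released addresses back.

```python
import ipaddress

class IPPool:
    """Toy IP pool: hand out free host addresses of a subnet."""

    def __init__(self, subnet):
        # hosts() excludes the network and broadcast addresses.
        self.free = list(ipaddress.ip_network(subnet).hosts())
        self.used = set()

    def allocate(self):
        """Take the first free address and mark it used."""
        ip = self.free.pop(0)
        self.used.add(ip)
        return str(ip)

    def release(self, ip):
        """Return a previously allocated address to the pool."""
        ip = ipaddress.ip_address(ip)
        self.used.discard(ip)
        self.free.insert(0, ip)

pool = IPPool("192.168.0.0/28")  # 14 usable host addresses
first = pool.allocate()
print(first)  # 192.168.0.1
```

A VM's NIC would hold one such address for its lifetime; releasing it on VM deletion is what keeps the pool and the DB in sync (the reconciliation section below deals with the cases where they drift apart).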
Pool Creation
`````````````
Pools are created using the `snf-manage pool-create` command:

.. code-block:: console

   # snf-manage pool-create --type=bridge --base=prv --size=20

will create a pool of 20 bridges, named prv1, prv2, ..., prv20.

You can verify the creation of the pool, and check its contents, by running:

.. code-block:: console

   # snf-manage pool-list
   # snf-manage pool-show --type=bridge 1

With the same commands you can handle a pool of MAC prefixes. For example:

.. code-block:: console

   # snf-manage pool-create --type=mac-prefix --base=aa:00:0 --size=65536

will create a pool of MAC prefixes from ``aa:00:1`` to ``b9:ff:f``. The MAC
prefix pool is responsible for providing only unicast and locally administered
MAC addresses, so many of these prefixes will be externally reserved, to
exclude them from allocation.

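"Unicast and locally administered" refers to the two low-order bits of the first MAC octet: the multicast bit (bit 0) must be clear, and the locally-administered bit (bit 1) must be set. A small check illustrating which prefixes qualify:

```python
def is_unicast_locally_administered(mac_prefix):
    """Check bits 0 (multicast) and 1 (locally administered) of octet 0."""
    first_octet = int(mac_prefix.split(":")[0], 16)
    multicast = first_octet & 0b01   # must be 0 for unicast
    local = first_octet & 0b10       # must be 1 for locally administered
    return multicast == 0 and local != 0

print(is_unicast_locally_administered("aa:00:01"))  # True  (0xaa = 0b10101010)
print(is_unicast_locally_administered("ab:00:01"))  # False (multicast bit set)
```

Prefixes failing this check are the ones the pool marks as externally reserved so they are never allocated.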
Cyclades advanced operations
----------------------------

Reconciliation mechanism
~~~~~~~~~~~~~~~~~~~~~~~~

On certain occasions, such as a Ganeti or RabbitMQ failure, the state of the
Cyclades database may differ from the real state of VMs and networks in the
Ganeti backends. The reconciliation process is designed to synchronize the
state of the Cyclades DB with Ganeti. There are two management commands, for
reconciling VMs and networks respectively.

Reconciling Virtual Machines
````````````````````````````

Reconciliation of VMs detects the following conditions:

* Stale DB servers without corresponding Ganeti instances
* Orphan Ganeti instances, without corresponding DB entries
* Out-of-sync state for DB entries with respect to Ganeti instances

To detect all inconsistencies you can just run:

.. code-block:: console

   $ snf-manage reconcile-servers

Adding the `--fix-all` option will do the actual synchronization:

.. code-block:: console

   $ snf-manage reconcile-servers --fix-all

Please see ``snf-manage reconcile-servers --help`` for all the details.


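The first two conditions reduce to set differences between the DB's view and Ganeti's view of the world. A sketch with fabricated server names:

```python
# Toy reconciliation: compare the instance names Cyclades knows about
# with the instances actually present in the Ganeti backends.
db_servers = {"snf-1", "snf-2", "snf-3"}          # as recorded in the DB
ganeti_instances = {"snf-2", "snf-3", "snf-9"}    # as reported by Ganeti

stale = db_servers - ganeti_instances    # in DB, missing from Ganeti
orphan = ganeti_instances - db_servers   # in Ganeti, missing from DB

print(sorted(stale))   # ['snf-1']
print(sorted(orphan))  # ['snf-9']
```

The third condition (out-of-sync state) requires comparing the recorded state of each instance in the intersection of the two sets against the state Ganeti reports, rather than a set difference.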
Reconciling Networks
````````````````````

Reconciliation of networks detects the following conditions:

* Stale DB networks without corresponding Ganeti networks
* Orphan Ganeti networks, without corresponding DB entries
* Private networks that are not created on all Ganeti backends
* Unsynchronized IP pools

To detect all inconsistencies you can just run:

.. code-block:: console

   $ snf-manage reconcile-networks

Adding the `--fix-all` option will do the actual synchronization:

.. code-block:: console

   $ snf-manage reconcile-networks --fix-all

Please see ``snf-manage reconcile-networks --help`` for all the details.



Block Storage Service (Archipelago)
===================================

Overview
--------
Archipelago offers Copy-on-Write snapshotable volumes. Pithos images can be
used to provision a volume with Copy-on-Write semantics (i.e. a clone).
Snapshots offer a unique deduplicated image of a volume, that reflects the
volume state during snapshot creation and is indistinguishable from a Pithos
image.

Archipelago is used by Cyclades and Ganeti for fast provisioning of VMs based
on CoW volumes. Moreover, it enables live migration of thinly-provisioned VMs
with no physically shared storage.

Archipelago Architecture
------------------------

.. image:: images/archipelago-architecture.png
   :width: 50%
   :target: _images/archipelago-architecture.png

.. _syn+archip+rados:

Overview of Synnefo + Archipelago + RADOS
-----------------------------------------

.. image:: images/synnefo-arch3.png
   :width: 100%
   :target: _images/synnefo-arch3.png

Prereqs
-------

The administrator must initialize the storage backend where Archipelago volume
blocks will reside.

In case of a files backend, the administrator must create two directories: one
for the Archipelago data blocks and one for the Archipelago map blocks. These
should probably be on shared storage, to enable sharing Archipelago volumes
between multiple nodes. He or she must also supply a directory where the
Pithos data and map blocks reside.

In case of a RADOS backend, the administrator must create two RADOS pools, one
for data blocks and one for map blocks. These pools must be the same pools
used by Pithos, in order to enable volume creation based on Pithos images.

Installation
------------

Archipelago consists of:

* ``libxseg0``: libxseg, used to communicate over shared memory segments
* ``python-xseg``: Python bindings for libxseg
* ``archipelago-kernel-dkms``: contains the Archipelago kernel modules that
  provide block devices to be used as VM disks
* ``python-archipelago``: the Archipelago Python module. Includes archipelago
  and vlmc functionality.
* ``archipelago``: user space tools and peers for Archipelago management and
  volume composition
* ``archipelago-ganeti``: Ganeti ExtStorage scripts, which enable Ganeti to
  provision VMs over Archipelago

Running

.. code-block:: console

   $ apt-get install archipelago-ganeti

should fetch all the required packages and get you up and running with
Archipelago.

Bear in mind that a custom librados is required, which is provided in the
GRNET apt repository.

For now, librados is a dependency of Archipelago, even if you do not intend to
use Archipelago over RADOS.

Configuration
-------------
Archipelago should work out of the box with a RADOS backend, but basic
configuration can be done in ``/etc/default/archipelago``.

If you wish to change the storage backend to files, set

.. code-block:: console

   STORAGE="files"

and provide the appropriate settings for the files storage backend in the
conf file.

These are:

* ``FILED_IMAGES``: directory for Archipelago data blocks.
* ``FILED_MAPS``: directory for Archipelago map blocks.
* ``PITHOS``: directory of Pithos data blocks.
* ``PITHOSMAPS``: directory of Pithos map blocks.

The settings for the RADOS storage backend are:

* ``RADOS_POOL_MAPS``: the pool where Archipelago and Pithos map blocks reside.
* ``RADOS_POOL_BLOCKS``: the pool where Archipelago and Pithos data blocks
  reside.

Examples can be found in the conf file.

Be aware that the Archipelago infrastructure doesn't provide default values
for these settings. If they are not set in the conf file, Archipelago will not
be able to function.

Archipelago also provides ``VERBOSITY`` config options to control the output
generated by the userspace peers.

The available options are:

* ``VERBOSITY_BLOCKERB``
* ``VERBOSITY_BLOCKERM``
* ``VERBOSITY_MAPPER``
* ``VERBOSITY_VLMC``

and the available values are:

* 0: error-only logging.
* 1: warning logging.
* 2: info logging.
* 3: debug logging. WARNING: this option produces tons of output, but the
  logrotate daemon should take care of it.

Working with Archipelago
------------------------

The ``archipelago`` tool provides basic management functionality for
Archipelago.

Usage:

.. code-block:: console

   $ archipelago [-u] command

Currently it supports the following commands:

* ``start [peer]``
  Starts Archipelago or the specified peer.
* ``stop [peer]``
  Stops Archipelago or the specified peer.
* ``restart [peer]``
  Restarts Archipelago or the specified peer.
* ``status``
  Shows the status of Archipelago.

Available peers: ``blockerm``, ``blockerb``, ``mapperd``, ``vlmcd``.

``start``, ``stop`` and ``restart`` can be combined with the ``-u / --user``
option to affect only the userspace peers supporting Archipelago.


777 |
|
778 |
Archipelago advanced operations |
779 |
------------------------------- |
780 |
The ``vlmc`` tool provides a way to interact with archipelago volumes |
781 |
|
782 |
* ``vlmc map <volumename>``: maps the volume to a xsegbd device. |
783 |
|
784 |
* ``vlmc unmap </dev/xsegbd[1-..]>``: unmaps the specified device from the |
785 |
system. |
786 |
|
787 |
* ``vlmc create <volumename> --snap <snapname> --size <size>``: creates a new |
788 |
volume named <volumename> from snapshot name <snapname> with size <size>. |
789 |
The ``--snap`` and ``--size`` are optional, but at least one of them is |
790 |
mandatory. e.g: |
791 |
|
792 |
``vlmc create <volumename> --snap <snapname>`` creates a volume named |
793 |
volumename from snapshot snapname. The size of the volume is the same as |
794 |
the size of the snapshot. |
795 |
|
796 |
``vlmc create <volumename> --size <size>`` creates an empty volume of size |
797 |
<size> named <volumename>. |
798 |
|
799 |
* ``vlmc remove <volumename>``: removes the volume and all the related |
800 |
archipelago blocks from storage. |
801 |
|
802 |
* ``vlmc list``: provides a list of archipelago volumes. Currently only works |
803 |
with RADOS storage backend. |
804 |
|
805 |
* ``vlmc info <volumename>``: shows volume information. Currently returns only |
806 |
volume size. |
807 |
|
808 |
* ``vlmc open <volumename>``: opens an archipelago volume. That is, taking all |
809 |
the necessary locks and also make the rest of the infrastructure aware of the |
810 |
operation. |
811 |
|
812 |
This operation succeeds if the volume is alread opened. |
813 |
|
814 |
* ``vlmc close <volumename>``: closes an archipelago volume. That is, performing |
815 |
all the necessary functions in the insfrastrure to successfully release the |
816 |
volume. Also releases all the acquired locks. |
817 |
|
818 |
``vlmc close`` should be performed after a ``vlmc open`` operation. |
819 |
|
820 |
* ``vlmc lock <volumename>``: locks a volume. This step allow the administrator |
821 |
to lock an archipelago volume, independently from the rest of the |
822 |
infrastrure. |
823 |
|
824 |
* ``vlmc unlock [-f] <volumename>``: unlocks a volume. This allow the |
825 |
administrator to unlock a volume, independently from the rest of the |
826 |
infrastructure. |
827 |
The unlock option can be performed only by the blocker that acquired the lock |
828 |
in the first place. To unlock a volume from another blocker, ``-f`` option |
829 |
must be used to break the lock. |
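
As a rough illustration of how these operations combine, a typical volume
lifecycle might look like the following (a sketch only; the volume name,
snapshot name and xsegbd device number are hypothetical):

.. code-block:: console

   $ vlmc create myvolume --snap mysnapshot
   $ vlmc map myvolume
   # ... use the mapped /dev/xsegbd device ...
   $ vlmc unmap /dev/xsegbd1
   $ vlmc remove myvolume
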


The "kamaki" API client
=======================

To upload, register or modify an image you will need the **kamaki** tool.
Before proceeding make sure that it is configured properly. Verify that
*image.url*, *file.url*, *user.url* and *token* are set as needed:

.. code-block:: console

   $ kamaki config list

To change a setting use ``kamaki config set``:

.. code-block:: console

   $ kamaki config set image.url https://cyclades.example.com/plankton
   $ kamaki config set file.url https://pithos.example.com/v1
   $ kamaki config set user.url https://accounts.example.com
   $ kamaki config set token ...

To test that everything works, try authenticating the current account with
kamaki:

.. code-block:: console

   $ kamaki user authenticate

This will output user information.

Upload Image
------------

By convention, images are stored in a container called ``images``. Check
whether the container exists by listing all containers in your account:

.. code-block:: console

   $ kamaki file list

If the container ``images`` does not exist, create it:

.. code-block:: console

   $ kamaki file create images

You are now ready to upload an image to container ``images``. You can upload it
with a Pithos+ client, or use kamaki directly:

.. code-block:: console

   $ kamaki file upload ubuntu.iso images

You can use any Pithos+ client to verify that the image was uploaded correctly,
or you can list the contents of the container with kamaki:

.. code-block:: console

   $ kamaki file list images

The full Pithos URL for the previous example will be
``pithos://u53r-un1qu3-1d/images/ubuntu.iso``, where ``u53r-un1qu3-1d`` is the
unique user id (uuid).

Register Image
--------------

To register an image you will need to use the full Pithos+ URL. To register
the image from the previous example as a public image, use:

.. code-block:: console

   $ kamaki image register Ubuntu pithos://u53r-un1qu3-1d/images/ubuntu.iso --public

The ``--public`` flag is important; if it is missing, the registered image
will not be listed by ``kamaki image list``.

Use ``kamaki image register`` with no arguments to see a list of available
options. A more complete example would be the following:

.. code-block:: console

   $ kamaki image register Ubuntu pithos://u53r-un1qu3-1d/images/ubuntu.iso \
      --public --disk-format diskdump --property kernel=3.1.2

To verify that the image was registered successfully use:

.. code-block:: console

   $ kamaki image list --name-like=ubuntu


Miscellaneous
=============

.. RabbitMQ

RabbitMQ Broker
---------------

Queue nodes run the RabbitMQ software, which provides AMQP functionality. To
guarantee high availability, more than one Queue node should be deployed, each
of them belonging to the same `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Synnefo uses the RabbitMQ
active/active `Highly Available Queues <http://www.rabbitmq.com/ha.html>`_,
which are mirrored between two nodes within a RabbitMQ cluster.

The RabbitMQ nodes that form the cluster are declared to Synnefo through the
`AMQP_HOSTS` setting. Each time a Synnefo component needs to connect to
RabbitMQ, one of these nodes is chosen at random. The client that Synnefo
uses to connect to RabbitMQ handles connection failures transparently and
tries to reconnect to a different node. As long as at least one of these nodes
is up and running, Synnefo functionality should not be degraded by RabbitMQ
node failures.
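
For example, a two-node cluster could be declared in the Synnefo settings
along these lines (a sketch; the hostnames and password are illustrative and
must match your own RabbitMQ nodes and the user created below):

.. code-block:: python

   # Illustrative values -- replace with the FQDNs and credentials
   # of your own RabbitMQ cluster nodes.
   AMQP_HOSTS = [
       "amqp://synnefo:example_pass@node1.example.com:5672",
       "amqp://synnefo:example_pass@node2.example.com:5672",
   ]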

All the queues that are used are declared as durable, meaning that messages
are persistently stored in RabbitMQ until they are successfully processed by
a client.

Currently, RabbitMQ is used by the following components:

* `snf-ganeti-eventd`, `snf-ganeti-hook` and `snf-progress-monitor`:
  These components send messages concerning the status and progress of
  jobs in the Ganeti backend.
* `snf-dispatcher`: This daemon consumes the messages that are sent from
  the above components, and updates the Cyclades DB accordingly.


Installation
~~~~~~~~~~~~

Please check the RabbitMQ documentation which covers extensively the
`installation of the RabbitMQ server <http://www.rabbitmq.com/download.html>`_
and the setup of a `RabbitMQ cluster
<http://www.rabbitmq.com/clustering.html>`_. Also, check out the `web
management plugin <http://www.rabbitmq.com/management.html>`_ that can be
useful for managing and monitoring RabbitMQ.

For a basic installation of RabbitMQ on two nodes (node1 and node2) you can do
the following:

On both nodes, install rabbitmq-server and create a Synnefo user:

.. code-block:: console

   $ apt-get install rabbitmq-server
   $ rabbitmqctl add_user synnefo "example_pass"
   $ rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

Also guarantee that both nodes share the same Erlang cookie, by running:

.. code-block:: console

   $ scp node1:/var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/.erlang.cookie

and restart the nodes:

.. code-block:: console

   $ /etc/init.d/rabbitmq-server restart


To set up the RabbitMQ cluster run:

.. code-block:: console

   root@node2: rabbitmqctl stop_app
   root@node2: rabbitmqctl reset
   root@node2: rabbitmqctl cluster rabbit@node1 rabbit@node2
   root@node2: rabbitmqctl start_app

You can verify that the cluster is set up correctly by running:

.. code-block:: console

   root@node2: rabbitmqctl cluster_status


Admin tool: snf-manage
----------------------

``snf-manage`` is a tool used to perform various administrative tasks. It
needs to be able to access the Django database, so the environment in which
it runs must be able to import the Django settings.

Additionally, administrative tasks can be performed via the admin web
interface located in /admin. Only users of type ADMIN can access the admin
pages. To change the type of a user to ADMIN, snf-manage can be used:

.. code-block:: console

   $ snf-manage user-modify 42 --type ADMIN

Logging
-------

Logging in Synnefo uses Python's logging module. The module is configured
using dictionary configuration, whose format is described here:

http://docs.python.org/release/2.7.1/library/logging.html#logging-config-dictschema

Note that this is a feature of Python 2.7 that we have backported for use in
Python 2.6.

The logging configuration dictionary is defined in
``/etc/synnefo/10-snf-webproject-logging.conf``.

The administrator can have finer logging control by modifying the
``LOGGING_SETUP`` dictionary, and defining subloggers with different handlers
and log levels. For example, to enable debug messages only for the API, set
the level of 'synnefo.api' to ``DEBUG``.
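
Such a sublogger entry could look roughly like this (a minimal sketch using
Python's ``logging.config.dictConfig``; the ``console`` handler and the exact
dictionary layout are illustrative -- in a real deployment you would edit the
existing ``LOGGING_SETUP`` in the file mentioned above):

.. code-block:: python

   import logging.config

   LOGGING_SETUP = {
       'version': 1,
       'disable_existing_loggers': False,
       'handlers': {
           # Illustrative handler; real deployments typically log to syslog.
           'console': {'class': 'logging.StreamHandler'},
       },
       'loggers': {
           'synnefo': {'handlers': ['console'], 'level': 'INFO'},
           # Sublogger: emit debug messages for the API only.
           'synnefo.api': {'level': 'DEBUG'},
       },
   }

   logging.config.dictConfig(LOGGING_SETUP)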

By default, the Django webapp and snf-manage log to syslog, while
`snf-dispatcher` logs to `/var/log/synnefo/dispatcher.log`.


.. _scale-up:

Scaling up to multiple nodes
============================

Here we will describe what a large-scale Synnefo deployment should look like.
Make sure you are familiar with Synnefo and Ganeti before proceeding with
this section. This means you should at least have already set up successfully
a working Synnefo deployment as described in the :ref:`Admin's Quick
Installation Guide <quick-install-admin-guide>` and also have read the
Administrator's Guide up to this section.

Graph of a scale-out Synnefo deployment
---------------------------------------

Each box in the following graph corresponds to a distinct physical node:

.. image:: images/synnefo-arch2-roles.png
   :width: 100%
   :target: _images/synnefo-arch2-roles.png

The above graph is actually the same as the one at the beginning of this
:ref:`guide <admin-guide>`, with the only difference that here we show the
Synnefo roles of each physical node. These roles are described in the
following section.

.. _physical-node-roles:

Physical Node roles
-------------------

As shown in the previous graph, a scale-out Synnefo deployment consists of
multiple physical nodes that have the following roles:

* **WEBSERVER**: A web server running in front of gunicorn (e.g.: Apache, nginx)
* **ASTAKOS**: The Astakos application (gunicorn)
* **ASTAKOS_DB**: The Astakos database (postgresql)
* **PITHOS**: The Pithos application (gunicorn)
* **PITHOS_DB**: The Pithos database (postgresql)
* **CYCLADES**: The Cyclades application (gunicorn)
* **CYCLADES_DB**: The Cyclades database (postgresql)
* **MQ**: The message queue (RabbitMQ)
* **GANETI_MASTER**: The Ganeti master of a Ganeti cluster
* **GANETI_NODE**: A VM-capable Ganeti node of a Ganeti cluster

You will probably also have:

* **CMS**: The CMS used as a frontend portal for the Synnefo services
* **NS**: A nameserver serving all other Synnefo nodes and resolving Synnefo FQDNs
* **CLIENT**: A machine that runs the Synnefo clients (e.g.: kamaki, Web UI);
  most of the time, the end user's local machine

From this point on we will also refer to the following groups of roles:

* **SYNNEFO**: [**ASTAKOS**, **ASTAKOS_DB**, **PITHOS**, **PITHOS_DB**, **CYCLADES**, **CYCLADES_DB**, **MQ**, **CMS**]
* **G_BACKEND**: [**GANETI_MASTER**, **GANETI_NODE**]

Of course, when deploying Synnefo you can combine multiple of the above roles
on a single physical node, but if you are trying to scale out, the above
separation gives you significant advantages.

So, in the next section we will take a look at what components you will have
to install on each physical node depending on its Synnefo role. We assume the
graph's architecture.

Components for each role
------------------------

When deploying Synnefo at large scale, you need to install different Synnefo
and/or third-party components on different physical nodes according to their
Synnefo role, as stated in the previous section.

Specifically:

Role **WEBSERVER**
    * Synnefo components: `None`
    * 3rd party components: Apache
Role **ASTAKOS**
    * Synnefo components: `snf-webproject`, `snf-astakos-app`
    * 3rd party components: Django, Gunicorn
Role **ASTAKOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **PITHOS**
    * Synnefo components: `snf-webproject`, `snf-pithos-app`, `snf-pithos-webclient`
    * 3rd party components: Django, Gunicorn
Role **PITHOS_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **CYCLADES**
    * Synnefo components: `snf-webproject`, `snf-cyclades-app`, `snf-vncauthproxy`
    * 3rd party components: Django, Gunicorn
Role **CYCLADES_DB**
    * Synnefo components: `None`
    * 3rd party components: PostgreSQL
Role **MQ**
    * Synnefo components: `None`
    * 3rd party components: RabbitMQ
Role **GANETI_MASTER**
    * Synnefo components: `snf-cyclades-gtools`
    * 3rd party components: Ganeti
Role **GANETI_NODE**
    * Synnefo components: `snf-cyclades-gtools`, `snf-network`, `snf-image`, `nfdhcpd`
    * 3rd party components: Ganeti
Role **CMS**
    * Synnefo components: `snf-webproject`, `snf-cloudcms`
    * 3rd party components: Django, Gunicorn
Role **NS**
    * Synnefo components: `None`
    * 3rd party components: BIND
Role **CLIENT**
    * Synnefo components: `kamaki`, `snf-image-creator`
    * 3rd party components: `None`

Example scale-out installation
------------------------------

In this section we describe an example of a medium-scale installation which
combines multiple roles on 10 different physical nodes. We also provide a
:ref:`guide <i-synnefo>` to help with such an installation.

We assume that we have the following 10 physical nodes with the corresponding
roles:

Node1:
    **WEBSERVER**, **ASTAKOS**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-astakos-app <i-astakos>`
Node2:
    **WEBSERVER**, **PITHOS**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-pithos-app <i-pithos>`
        * :ref:`snf-pithos-webclient <i-pithos>`
Node3:
    **WEBSERVER**, **CYCLADES**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cyclades-app <i-cyclades>`
        * :ref:`snf-vncauthproxy <i-cyclades>`
Node4:
    **WEBSERVER**, **CMS**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`gunicorn <i-gunicorn>`
        * :ref:`apache <i-apache>`
        * :ref:`snf-webproject <i-webproject>`
        * :ref:`snf-cloudcms <i-cms>`
Node5:
    **ASTAKOS_DB**, **PITHOS_DB**, **CYCLADES_DB**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`postgresql <i-db>`
Node6:
    **MQ**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`rabbitmq <i-mq>`
Node7:
    **GANETI_MASTER**, **GANETI_NODE**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node8:
    **GANETI_NODE**

    Guide sections:
        * :ref:`apt <i-apt>`
        * :ref:`general <i-backends>`
        * :ref:`ganeti <i-ganeti>`
        * :ref:`snf-cyclades-gtools <i-gtools>`
        * :ref:`snf-network <i-network>`
        * :ref:`snf-image <i-image>`
        * :ref:`nfdhcpd <i-network>`
Node9:
    **GANETI_NODE**

    Guide sections:
        `Same as Node8`
Node10:
    **GANETI_NODE**

    Guide sections:
        `Same as Node8`

All sections: :ref:`Scale-out Guide <i-synnefo>`


Upgrade Notes
=============

.. toctree::
   :maxdepth: 1

   v0.12 -> v0.13 <upgrade/upgrade-0.13>


Changelog, NEWS
===============

* v0.13 :ref:`Changelog <Changelog-0.13>`, :ref:`NEWS <NEWS-0.13>`