Revision 7ed400f0

b/INSTALL
19 19
  versions 0.11.X or above have shown good behavior).
20 20
- `DRBD <http://www.drbd.org/>`_, kernel module and userspace utils,
21 21
  version 8.0.7 or above
22
- `RBD <http://ceph.newdream.net/>`_, kernel modules (rbd.ko/libceph.ko)
23
  and userspace utils (ceph-common)
22 24
- `LVM2 <http://sourceware.org/lvm2/>`_
23 25
- `OpenSSH <http://www.openssh.com/portable.html>`_
24 26
- `bridge utilities <http://www.linuxfoundation.org/en/Net:Bridge>`_
......
50 52
usually they can be installed via the standard package manager. Also
51 53
many of them will already be installed on a standard machine. On
52 54
Debian/Ubuntu, you can use this command line to install all required
53
packages, except for DRBD and Xen::
55
packages, except for RBD, DRBD and Xen::
54 56

  
55 57
  $ apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
56 58
                    ndisc6 python python-pyopenssl openssl \
b/doc/admin.rst
115 115
the instance sees the same virtual drive in all cases, the node-level
116 116
configuration varies between them.
117 117

  
118
There are four disk templates you can choose from:
118
There are five disk templates you can choose from:
119 119

  
120 120
diskless
121 121
  The instance has no disks. Only used for special purpose operating
......
138 138
  to obtain a highly available instance that can be failed over to a
139 139
  remote node should the primary one fail.
140 140

  
141
rbd
142
  The instance will use volumes inside a RADOS cluster as the backend for its
143
  disks. It will access them using the RADOS block device (RBD).
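
  For example, a minimal sketch of creating such an instance (the node
  name, instance name, OS and disk size below are hypothetical)::

    gnt-instance add -t rbd -s 10G -o debootstrap \
      -n node1.example.com instance1.example.com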
144

  
141 145
IAllocator
142 146
~~~~~~~~~~
143 147

  
......
510 514
target node, or the operation will fail if that's not possible. See
511 515
:ref:`instance-startup-label` for details.
512 516

  
517
If the instance's disk template is of type rbd, then you can specify
518
the target node (which can be any node) explicitly, or specify an
519
iallocator plugin. If you omit both, the default iallocator will be
520
used to determine the target node::
521

  
522
  gnt-instance failover -n TARGET_NODE INSTANCE_NAME
523

  
513 524
Live migrating an instance
514 525
~~~~~~~~~~~~~~~~~~~~~~~~~~
515 526

  
......
530 541
which case the target node should have at least the instance's current
531 542
runtime memory free.
532 543

  
544
If the instance's disk template is of type rbd, then you can specify
545
the target node (which can be any node) explicitly, or specify an
546
iallocator plugin. If you omit both, the default iallocator will be
547
used to determine the target node::
548

  
549
  gnt-instance migrate -n TARGET_NODE INSTANCE_NAME
550

  
533 551
Moving an instance (offline)
534 552
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
535 553

  
......
1247 1265
6. Remove the ganeti state directory (``rm -rf /var/lib/ganeti/*``),
1248 1266
   replacing the path with the correct path for your installation.
1249 1267

  
1268
7. If using RBD, run ``rbd unmap /dev/rbdN`` to unmap the RBD disks.
1269
   Then remove the RBD disk images used by Ganeti, identified by their
1270
   UUIDs (``rbd rm uuid.rbd.diskN``).
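
   A possible sketch of this step, assuming every ``/dev/rbd*`` device
   on the node belongs to Ganeti and the images live in the default
   ``rbd`` pool::

     for dev in /dev/rbd[0-9]*; do rbd unmap $dev; done
     for img in $(rbd ls | grep '\.rbd\.disk'); do rbd rm $img; done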
1271

  
1250 1272
On the master node, remove the cluster from the master-netdev (usually
1251 1273
``xen-br0`` for bridged mode, otherwise ``eth0`` or similar), by running
1252 1274
``ip a del $clusterip/32 dev xen-br0`` (use the correct cluster ip and
b/doc/iallocator.rst
41 41
Command line interface changes
42 42
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
43 43

  
44
The node selection options in instanece add and instance replace disks
44
The node selection options in instance add and instance replace disks
45 45
can be replaced by the new ``--iallocator=NAME`` option (shortened to
46 46
``-I``), which will cause the auto-assignment of nodes with the
47 47
passed iallocator. The selected node(s) will be shown as part of the
b/doc/install.rst
69 69
You can also use file-based storage only, without LVM, but this setup is
70 70
not detailed in this document.
71 71

  
72
If you choose to use RBD-based instances, there's no need for LVM
73
provisioning. However, this feature is experimental, and is not
74
recommended for production clusters.
75

  
72 76
While you can use an existing system, please note that the Ganeti
73 77
installation is intrusive in terms of changes to the system
74 78
configuration, and it's best to use a newly-installed system without
......
300 304
       }
301 305
     }
302 306

  
307
Installing RBD
308
++++++++++++++
309

  
310
Recommended on all nodes: RBD_ is required if you want to create
311
instances with RBD disks residing inside a RADOS cluster (i.e. make
312
use of the rbd disk template). RBD-based instances can fail over or migrate to
313
any other node in the Ganeti cluster, enabling you to exploit all of
314
Ganeti's high availability (HA) features.
315

  
316
.. attention::
317
   Be careful though: rbd is still experimental! For now it is
318
   recommended only for testing purposes.  No sensitive data should be
319
   stored there.
320

  
321
.. _RBD: http://ceph.newdream.net/
322

  
323
You will need the ``rbd`` and ``libceph`` kernel modules, the RBD/Ceph
324
userspace utils (ceph-common Debian package) and an appropriate
325
Ceph/RADOS configuration file on every VM-capable node.
326

  
327
You will also need a working RADOS Cluster accessible by the above
328
nodes.
329

  
330
RADOS Cluster
331
~~~~~~~~~~~~~
332

  
333
You will need a working RADOS Cluster accessible by all VM-capable nodes
334
to use the RBD template. For more information on setting up a RADOS
335
Cluster, refer to the `official docs <http://ceph.newdream.net/>`_.
336

  
337
If you want to use a pool for storing RBD disk images other than the
338
default (``rbd``), you should first create the pool in the RADOS
339
Cluster, and then set the corresponding rbd disk parameter named
340
``pool``.
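
For example, a sketch of creating a pool named ``ganeti`` and pointing
the rbd template at it (the pool name is hypothetical; ``rados mkpool``
comes with the Ceph userspace utils, and the ``--disk-parameters``
option of ``gnt-cluster modify`` sets cluster-wide disk parameters)::

  rados mkpool ganeti
  gnt-cluster modify --disk-parameters=rbd:pool=ganeti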
341

  
342
Kernel Modules
343
~~~~~~~~~~~~~~
344

  
345
Unless your distribution already provides them, you might need to compile
346
the ``rbd`` and ``libceph`` modules from source. You will need Linux
347
Kernel 3.2 or above for the kernel modules. Alternatively you will have
348
to build them as external modules (from Linux Kernel source 3.2 or
349
above), if you want to run a less recent kernel, or your kernel doesn't
350
include them.
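
A quick sanity check that the modules are available (``modprobe rbd``
pulls in ``libceph`` as a dependency)::

  modprobe rbd
  lsmod | grep -w rbd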
351

  
352
Userspace Utils
353
~~~~~~~~~~~~~~~
354

  
355
The RBD template has been tested with ``ceph-common`` v0.38 and
356
above. We recommend using the latest version of ``ceph-common``.
357

  
358
.. admonition:: Debian
359

  
360
   On Debian, you can just install the RBD/Ceph userspace utils with
361
   the following command::
362

  
363
      apt-get install ceph-common
364

  
365
Configuration file
366
~~~~~~~~~~~~~~~~~~
367

  
368
You should also provide an appropriate configuration file
369
(``ceph.conf``) in ``/etc/ceph``. For the rbd userspace utils, you'll
370
only need to specify the IP addresses of the RADOS Cluster monitors.
371

  
372
.. admonition:: ceph.conf
373

  
374
   Sample configuration file::
375

  
376
    [mon.a]
377
           host = example_monitor_host1
378
           mon addr = 1.2.3.4:6789
379
    [mon.b]
380
           host = example_monitor_host2
381
           mon addr = 1.2.3.5:6789
382
    [mon.c]
383
           host = example_monitor_host3
384
           mon addr = 1.2.3.6:6789
385

  
386
For more information, please see the `Ceph Docs
387
<http://ceph.newdream.net/docs/latest/>`_.
388

  
303 389
Other required software
304 390
+++++++++++++++++++++++
305 391

  
b/man/gnt-cluster.rst
445 445
stripes
446 446
    Number of stripes to use for new LVs.
447 447

  
448
List of parameters available for the **rbd** template:
449

  
450
pool
451
    The RADOS cluster pool, inside which all rbd volumes will reside.
452
    When a new RADOS cluster is deployed, the default pool in which rbd
453
    volumes (Images in RADOS terminology) are stored is 'rbd'.
454

  
448 455
The option ``--maintain-node-health`` allows one to enable/disable
449 456
automatic maintenance actions on nodes. Currently these include
450 457
automatic shutdown of instances and deactivation of DRBD devices on
b/man/gnt-instance.rst
27 27
^^^
28 28

  
29 29
| **add**
30
| {-t|--disk-template {diskless | file \| plain \| drbd}}
30
| {-t|--disk-template {diskless \| file \| plain \| drbd \| rbd}}
31 31
| {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,metavg=*VG*][,mode=*ro\|rw*]
32 32
|  \| {-s|--os-size} *SIZE*}
33 33
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
......
588 588
drbd
589 589
    Disk devices will be drbd (version 8.x) on top of lvm volumes.
590 590

  
591
rbd
592
    Disk devices will be rbd volumes residing inside a RADOS cluster.
593

  
591 594

  
592 595
The optional second value of the ``-n (--node)`` option is used for the drbd
593 596
template type and specifies the remote node.
......
1321 1324
{*amount*}
1322 1325

  
1323 1326
Grows an instance's disk. This is only possible for instances having a
1324
plain or drbd disk template.
1327
plain, drbd or rbd disk template.
1325 1328

  
1326 1329
Note that this command only changes the block device size; it will not
1327 1330
grow the actual filesystems, partitions, etc. that live on that
......
1341 1344
to the arguments in the create instance operation, with a suffix
1342 1345
denoting the unit.
1343 1346

  
1344
Note that the disk grow operation might complete on one node but fail
1345
on the other; this will leave the instance with different-sized LVs on
1346
the two nodes, but this will not create problems (except for unused
1347
space).
1347
For instances with a drbd template, note that the disk grow operation
1348
might complete on one node but fail on the other; this will leave the
1349
instance with different-sized LVs on the two nodes, but this will not
1350
create problems (except for unused space).
1348 1351

  
1349 1352
If you do not want gnt-instance to wait for the new disk region to be
1350 1353
synced, use the ``--no-wait-for-sync`` option.
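
For example, a sketch of growing an instance's first disk by 16 GiB
(the instance name is hypothetical)::

    gnt-instance grow-disk instance1.example.com 0 16g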
......
1401 1404
FAILOVER
1402 1405
^^^^^^^^
1403 1406

  
1404
**failover** [-f] [--ignore-consistency] [--shutdown-timeout=*N*]
1405
[--submit] [--ignore-ipolicy] {*instance*}
1407
| **failover** [-f] [--ignore-consistency] [--ignore-ipolicy]
1408
| [--shutdown-timeout=*N*]
1409
| [{-n|--target-node} *node* \| {-I|--iallocator} *name*]
1410
| [--submit]
1411
| {*instance*}
1406 1412

  
1407 1413
Failover will stop the instance (if running), change its primary node,
1408 1414
and if it was originally running it will start it again (on the new
1409 1415
primary). This only works for instances with the drbd template (in which
1410 1416
case you can only fail to the secondary node) and for externally
1411
mirrored templates (shared storage) (which can change to any other
1417
mirrored templates (blockdev and rbd) (which can change to any other
1412 1418
node).
1413 1419

  
1420
If the instance's disk template is of type blockdev or rbd, then you
1421
can explicitly specify the target node (which can be any node) using
1422
the ``-n`` or ``--target-node`` option, or specify an iallocator plugin
1423
using the ``-I`` or ``--iallocator`` option. If you omit both, the default
1424
iallocator will be used to determine the target node.
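
For example, a sketch of failing over using an iallocator to pick the
target node (``hail`` is the iallocator shipped with Ganeti's htools;
the instance name is hypothetical)::

    gnt-instance failover -I hail instance1.example.com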
1425

  
1414 1426
Normally the failover will check the consistency of the disks before
1415 1427
failing over the instance. If you are trying to migrate instances off
1416 1428
a dead node, this will fail. Use the ``--ignore-consistency`` option
......
1443 1455

  
1444 1456
**migrate** [-f] [--allow-failover] [--non-live]
1445 1457
[--migration-mode=live\|non-live] [--ignore-ipolicy]
1446
[--no-runtime-changes] {*instance*}
1447

  
1448
Migrate will move the instance to its secondary node without
1449
shutdown. It only works for instances having the drbd8 disk template
1450
type.
1458
[--no-runtime-changes]
1459
[{-n|--target-node} *node* \| {-I|--iallocator} *name*] {*instance*}
1460

  
1461
Migrate will move the instance to its secondary node without shutdown.
1462
As with failover, it only works for instances having the drbd disk
1463
template or an externally mirrored disk template type such as blockdev
1464
or rbd.
1465

  
1466
If the instance's disk template is of type blockdev or rbd, then you can
1467
explicitly specify the target node (which can be any node) using the
1468
``-n`` or ``--target-node`` option, or specify an iallocator plugin
1469
using the ``-I`` or ``--iallocator`` option. If you omit both, the
1470
default iallocator will be used to determine the target node.
1451 1471

  
1452 1472
The migration command needs a perfectly healthy instance, as we rely
1453 1473
on the dual-master capability of drbd8 and the disks of the instance
