Revision e19f7095

b/doc/design-2.7.rst
 - :doc:`design-virtual-clusters`
 - :doc:`design-network`
 - :doc:`design-linuxha`
+- :doc:`design-shared-storage` (Updated to reflect the new ExtStorage
+  Interface)
 
 The following designs have been partially implemented in Ganeti 2.7:
 
b/doc/design-shared-storage.rst
-======================================
-Ganeti shared storage support for 2.3+
-======================================
+=============================
+Ganeti shared storage support
+=============================
 
 This document describes the changes in Ganeti 2.3+ compared to Ganeti
-2.3 storage model.
+2.3 storage model. It also documents the ExtStorage Interface.
 
 .. contents:: :depth: 4
 .. highlight:: shell-example
b/man/ganeti-extstorage-interface.rst
 EXECUTABLE SCRIPTS
 ------------------
 
-
 create
 ~~~~~~
 
......
 TEXT FILES
 ----------
 
-
 parameters.list
 ~~~~~~~~~~~~~~~
 
......
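Judging from the example that follows, a provider's ``parameters.list`` could look roughly like the sketch below. This is an illustration only: the one-parameter-per-line layout with a trailing description, and the ``fromsnap``/``nas_ip`` entries, are assumptions based on the example command, not a statement of the documented file format.

```text
fromsnap  Name of the storage-side snapshot to clone the new volume from
nas_ip    IP address of the NAS appliance hosting the volume
```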
 
     # gnt-instance add --disk=0:fromsnap="file_name",nas_ip="1.2.3.4" ...
 
+EXAMPLES
+--------
+
+In the following examples we assume that you have already successfully
+installed two ExtStorage providers: ``pvdr1`` and ``pvdr2``.
+
+Add a new instance with a 10G first disk provided by ``pvdr1`` and a 20G
+second disk provided by ``pvdr2``::
+
+    # gnt-instance add -t ext --disk=0:size=10G,provider=pvdr1 \
+                              --disk=1:size=20G,provider=pvdr2
+
+Add a new instance with a 5G first disk provided by provider ``pvdr1``
+and also pass the ``prm1``, ``prm2`` parameters to the provider, with
+the corresponding values ``val1``, ``val2``::
+
+   # gnt-instance add -t ext \
+                      --disk=0:size=5G,provider=pvdr1,prm1=val1,prm2=val2
+
+Modify an existing instance of disk type ``ext`` by adding a new 30G
+disk provided by provider ``pvdr2``::
+
+   # gnt-instance modify --disk 1:add,size=30G,provider=pvdr2 <instance>
+
+Modify an existing instance of disk type ``ext`` by adding 2 new disks,
+of different providers, passing one parameter for the first one::
+
+   # gnt-instance modify --disk 2:add,size=3G,provider=pvdr1,prm1=val1 \
+                         --disk 3:add,size=5G,provider=pvdr2 \
+                         <instance>
+
 NOTES
 -----
 
b/man/gnt-instance.rst
 | **add**
 | {-t|\--disk-template {diskless | file \| plain \| drbd \| rbd}}
 | {\--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,metavg=*VG*][,mode=*ro\|rw*]
+|  \| {size=*VAL*,provider=*PROVIDER*}[,param=*value*... ][,mode=*ro\|rw*]
 |  \| {-s|\--os-size} *SIZE*}
 | [\--no-ip-check] [\--no-name-check] [\--no-start] [\--no-install]
 | [\--net=*N* [:options...] \| \--no-nics]
......
 instance. The numbering of disks starts at zero, and at least one disk
 needs to be passed. For each disk, either the size or the adoption
 source needs to be given, and optionally the access mode (read-only or
-the default of read-write) and the LVM volume group can also be
-specified (via the ``vg`` key). For DRBD devices, a different VG can
-be specified for the metadata device using the ``metavg`` key.  The
-size is interpreted (when no unit is given) in mebibytes. You can also
-use one of the suffixes *m*, *g* or *t* to specify the exact the units
-used; these suffixes map to mebibytes, gibibytes and tebibytes.
+the default of read-write). The size is interpreted (when no unit is
+given) in mebibytes. You can also use one of the suffixes *m*, *g* or
+*t* to specify the units used; these suffixes map to mebibytes,
+gibibytes and tebibytes. For LVM and DRBD devices, the LVM volume
+group can also be specified (via the ``vg`` key). For DRBD devices, a
+different VG can be specified for the metadata device using the
+``metavg`` key. For ExtStorage devices, the ``provider`` option is
+also mandatory and specifies which ExtStorage provider to use.
+
+When creating ExtStorage disks, arbitrary parameters can also be
+passed to the ExtStorage provider. These parameters are passed as
+additional comma-separated options, so an ExtStorage disk provided by
+provider ``pvdr1`` with parameters ``param1``, ``param2`` would be
+passed as ``--disk 0:size=10G,provider=pvdr1,param1=val1,param2=val2``.
 
 When using the ``adopt`` key in the disk definition, Ganeti will
 reuse those volumes (instead of creating new ones) as the
......
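The comma-separated provider parameters described above are ultimately handed to the provider's scripts through its environment. As a hedged illustration of that mapping (the ``EXTP_`` prefix and the exact variable names here are assumptions for the sketch, not the documented interface), a disk spec string could be decomposed like this:

```shell
# Hypothetical sketch: split an ExtStorage disk spec such as
# "0:size=10G,provider=pvdr1,param1=val1" into environment-style
# assignments. Variable names are illustrative assumptions.
spec_to_env() {
    spec=${1#*:}              # drop the leading "0:" disk index
    old_ifs=$IFS
    IFS=','
    for kv in $spec; do
        key=${kv%%=*}
        val=${kv#*=}
        case $key in
            size) echo "VOL_SIZE=$val" ;;
            provider) echo "PROVIDER=$val" ;;
            # Any extra option is a provider parameter; uppercase it.
            *) echo "EXTP_$(printf '%s' "$key" | tr '[:lower:]' '[:upper:]')=$val" ;;
        esac
    done
    IFS=$old_ifs
}
```

For example, ``spec_to_env "0:size=10G,provider=pvdr1,param1=val1"`` would emit one assignment per option, with ``param1`` surfacing as a provider parameter.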
 can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
 2:size=100G``.
 
+The minimum information needed to specify an ExtStorage disk is the
+``size`` and the ``provider``. For example:
+``--disk 0:size=20G,provider=pvdr1``.
+
 The ``--no-ip-check`` skips the checks that are done to see if the
 instance's IP is not already alive (i.e. reachable from the master
 node).
......
 file
     Disk devices will be regular files.
 
+sharedfile
+    Disk devices will be regular files on a shared directory.
+
 plain
     Disk devices will be logical volumes.
 
......
 rbd
     Disk devices will be rbd volumes residing inside a RADOS cluster.
 
+blockdev
+    Disk devices will be adopted pre-existing block devices.
+
+ext
+    Disk devices will be provided by external shared storage,
+    through the ExtStorage Interface using ExtStorage providers.
 
 The optional second value of the ``-n (--node)`` is used for the drbd
 template type and specifies the remote node.
......
       -B maxmem=512 -o debian-etch -n node1.example.com instance1.example.com
     # gnt-instance add -t drbd --disk 0:size=30g -B maxmem=512 -o debian-etch \
       -n node1.example.com:node2.example.com instance2.example.com
+    # gnt-instance add -t rbd --disk 0:size=30g -B maxmem=512 -o debian-etch \
+      -n node1.example.com instance1.example.com
+    # gnt-instance add -t ext --disk 0:size=30g,provider=pvdr1 -B maxmem=512 \
+      -o debian-etch -n node1.example.com instance1.example.com
+    # gnt-instance add -t ext --disk 0:size=30g,provider=pvdr1,param1=val1 \
+      --disk 1:size=40g,provider=pvdr2,param2=val2,param3=val3 -B maxmem=512 \
+      -o debian-etch -n node1.example.com instance1.example.com
 
 
 BATCH-CREATE
......
 | [{-B|\--backend-parameters} *BACKEND\_PARAMETERS*]
 | [{-m|\--runtime-memory} *SIZE*]
 | [\--net add*[:options]* \| \--net [*N*:]remove \| \--net *N:options*]
-| [\--disk add:size=*SIZE*[,vg=*VG*][,metavg=*VG*] \| \--disk [*N*:]remove \|
+| [\--disk add:size=*SIZE*[,vg=*VG*][,metavg=*VG*] \|
+|  \--disk add:size=*SIZE*,provider=*PROVIDER*[,param=*value*... ] \|
+|  \--disk [*N*:]remove \|
 |  \--disk *N*:mode=*MODE*]
 | [{-t|\--disk-template} plain | {-t|\--disk-template} drbd -n *new_secondary*] [\--no-wait-for-sync]
 | [\--os-type=*OS* [\--force-variant]]
......
 by ballooning it up or down to the new value.
 
 The ``--disk add:size=``*SIZE* option adds a disk to the instance. The
-optional ``vg=``*VG* option specifies an LVM volume group other than
-the default volume group to create the disk on. For DRBD disks, the
+optional ``vg=``*VG* option specifies an LVM volume group other than the
+default volume group to create the disk on. For DRBD disks, the
 ``metavg=``*VG* option specifies the volume group for the metadata
-device. ``--disk`` *N*``:add,size=``**SIZE** can be used to add a
-disk at a specific index. The ``--disk remove`` option will remove the
-last disk of the instance. Use ``--disk `` *N*``:remove`` to remove a
-disk by its index. The ``--disk`` *N*``:mode=``*MODE* option will change
-the mode of the Nth disk of the instance between read-only (``ro``) and
-read-write (``rw``).
+device. When adding an ExtStorage disk, the ``provider=``*PROVIDER*
+option is also mandatory and specifies the ExtStorage provider. For
+ExtStorage disks, arbitrary parameters can also be passed as additional
+comma-separated options, as in the **add** command. ``--disk``
+*N*``:add,size=``**SIZE** can be used to add a disk at a specific index.
+The ``--disk remove`` option will remove the last disk of the instance.
+Use ``--disk`` *N*``:remove`` to remove a disk by its index. The
+``--disk`` *N*``:mode=``*MODE* option will change the mode of the Nth
+disk of the instance between read-only (``ro``) and read-write (``rw``).
 
 The ``--net add:``*options* and ``--net`` *N*``:add,``*options* option
 will add a new network interface to the instance. The available options
......
 | {*instance*} {*disk*} {*amount*}
 
 Grows an instance's disk. This is only possible for instances having a
-plain, drbd, file, sharedfile or rbd disk template.
+plain, drbd, file, sharedfile, rbd or ext disk template. For the ext
+template to work, the ExtStorage provider should also support growing.
+This means having a ``grow`` script that actually grows the volume of
+the external shared storage.
 
 Note that this command only changes the block device size; it will not
 grow the actual filesystems, partitions, etc. that live on that
......
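The ``grow`` script requirement added in the hunk above can be illustrated with a minimal sketch. This is not Ganeti's implementation: the ``VOL_NAME`` and ``VOL_NEW_SIZE`` environment variables are assumptions about what the ExtStorage interface hands to the script, and the resize step is a placeholder where a real provider would invoke its storage backend's own CLI.

```shell
# Hypothetical sketch of an ExtStorage "grow" script's core logic,
# written as a function so it can be exercised directly.
grow_volume() {
    # Ganeti is assumed to provide the volume name and the new size
    # (in MiB) in the environment; fail loudly if they are missing.
    if [ -z "$VOL_NAME" ] || [ -z "$VOL_NEW_SIZE" ]; then
        echo "grow: VOL_NAME and VOL_NEW_SIZE must be set" >&2
        return 1
    fi
    # Placeholder for the provider-specific resize step; a real
    # provider would call its storage appliance's CLI here.
    echo "resizing $VOL_NAME to ${VOL_NEW_SIZE}M"
}
```

A provider shipping such a script would make ``gnt-instance grow-disk`` usable on its ``ext`` disks, since Ganeti only grows the block device and delegates the storage-side resize to the provider.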
 
 Failover will stop the instance (if running), change its primary node,
 and if it was originally running it will start it again (on the new
-primary). This only works for instances with drbd template (in which
-case you can only fail to the secondary node) and for externally
-mirrored templates (blockdev and rbd) (which can change to any other
-node).
-
-If the instance's disk template is of type blockdev or rbd, then you
-can explicitly specify the target node (which can be any node) using
-the ``-n`` or ``--target-node`` option, or specify an iallocator plugin
-using the ``-I`` or ``--iallocator`` option. If you omit both, the default
-iallocator will be used to specify the target node.
+primary). This works for instances with drbd template (in which case you
+can only fail to the secondary node) and for externally mirrored
+templates (sharedfile, blockdev, rbd and ext), in which case you can
+fail over to any other node.
+
+If the instance's disk template is of type sharedfile, blockdev, rbd or
+ext, then you can explicitly specify the target node (which can be any
+node) using the ``-n`` or ``--target-node`` option, or specify an
+iallocator plugin using the ``-I`` or ``--iallocator`` option. If you
+omit both, the default iallocator will be used to specify the target
+node.
+
+If the instance's disk template is of type drbd, the target node is
+automatically selected as the drbd's secondary node. Changing the
+secondary node is possible with a replace-disks operation.
 
 Normally the failover will check the consistency of the disks before
 failing over the instance. If you are trying to migrate instances off
......
 
     # gnt-instance failover instance1.example.com
 
+For externally mirrored templates, ``-n`` is also available::
+
+    # gnt-instance failover -n node3.example.com instance1.example.com
+
 
 MIGRATE
 ^^^^^^^
......
 | **migrate** [-f] \--cleanup [\--submit] {*instance*}
 
 Migrate will move the instance to its secondary node without shutdown.
-As with failover, it only works for instances having the drbd disk
-template or an externally mirrored disk template type such as blockdev
-or rbd.
-
-If the instance's disk template is of type blockdev or rbd, then you can
-explicitly specify the target node (which can be any node) using the
-``-n`` or ``--target-node`` option, or specify an iallocator plugin
-using the ``-I`` or ``--iallocator`` option. If you omit both, the
-default iallocator will be used to specify the target node.
-Alternatively, the default iallocator can be requested by specifying
-``.`` as the name of the plugin.
-
-The migration command needs a perfectly healthy instance, as we rely
-on the dual-master capability of drbd8 and the disks of the instance
-are not allowed to be degraded.
+As with failover, it works for instances having the drbd disk template
+or an externally mirrored disk template type such as sharedfile,
+blockdev, rbd or ext.
+
+If the instance's disk template is of type sharedfile, blockdev, rbd or
+ext, then you can explicitly specify the target node (which can be any
+node) using the ``-n`` or ``--target-node`` option, or specify an
+iallocator plugin using the ``-I`` or ``--iallocator`` option. If you
+omit both, the default iallocator will be used to specify the target
+node. Alternatively, the default iallocator can be requested by
+specifying ``.`` as the name of the plugin.
+
+If the instance's disk template is of type drbd, the target node is
+automatically selected as the drbd's secondary node. Changing the
+secondary node is possible with a replace-disks operation.
+
+The migration command needs a perfectly healthy instance for drbd
+instances, as we rely on the dual-master capability of drbd8 and the
+disks of the instance are not allowed to be degraded.
 
 The ``--non-live`` and ``--migration-mode=non-live`` options will
 switch (for the hypervisors that support it) between a "fully live"
......
 viewed with the **gnt-cluster info** command).
 
 If the ``--cleanup`` option is passed, the operation changes from
-migration to attempting recovery from a failed previous migration.  In
+migration to attempting recovery from a failed previous migration. In
 this mode, Ganeti checks if the instance runs on the correct node (and
 updates its configuration if not) and ensures the instances' disks
 are configured correctly. In this mode, the ``--non-live`` option is
......
 | [-n *node*] [\--shutdown-timeout=*N*] [\--submit] [\--ignore-ipolicy]
 | {*instance*}
 
-Move will move the instance to an arbitrary node in the cluster.  This
+Move will move the instance to an arbitrary node in the cluster. This
 works only for instances having a plain or file disk template.
 
 Note that since this operation is done via data copy, it will take a
