as 'dc'.
For KVM the boot order is either "floppy", "cdrom", "disk" or
- "network". Please note that older versions of KVM couldn't
- netboot from virtio interfaces. This has been fixed in more recent
- versions and is confirmed to work at least with qemu-kvm 0.11.1.
+ "network". Please note that older versions of KVM couldn't netboot
+ from virtio interfaces. This has been fixed in more recent versions
+ and is confirmed to work at least with qemu-kvm 0.11.1. Also note
+ that if you have set the ``kernel_path`` option, that will be used
+ for booting, and this setting will be silently ignored.
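+
+ For example, to make an instance boot from the network first (a
+ sketch; ``my-inst`` is a placeholder instance name)::
+
+ gnt-instance modify -H boot_order=network my-inst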
blockdev\_prefix
Valid for the Xen HVM and PVM hypervisors.
Enables or disables passing mouse events via SPICE vdagent.
+cpu\_type
+ Valid for the KVM hypervisor.
+
+ This parameter determines the emulated CPU for the instance. If this
+ parameter is empty (which is the default), it will not be passed to
+ KVM.
+
+ Be aware of setting this parameter to ``"host"`` if you have nodes
+ with different CPUs from each other. Live migration may stop working
+ in this situation.
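+
+ For example, to expose the host CPU model to the instance (a sketch;
+ ``my-inst`` is a placeholder instance name)::
+
+ gnt-instance modify -H cpu_type=host my-inst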
+
+ For more information please refer to the KVM manual.
+
acpi
Valid for the Xen HVM and KVM hypervisors.
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the kernel to boot
- the instance with. Xen PVM instances always require this, while
- for KVM if this option is empty, it will cause the machine to load
- the kernel from its disks.
+ the instance with. Xen PVM instances always require this, while for
+ KVM if this option is empty, it will cause the machine to load the
+ kernel from its disks (and the boot will be done according to
+ ``boot_order``).
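+
+ For example, to boot a KVM instance directly from a kernel on the
+ node (a sketch; the kernel path shown is only illustrative)::
+
+ gnt-instance modify -H kernel_path=/boot/vmlinuz my-inst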
kernel\_args
Valid for the Xen PVM and KVM hypervisors.
versions >= 0.11.0.
cpu\_mask
- Valid for the LXC hypervisor.
+ Valid for the Xen, KVM and LXC hypervisors.
The processes belonging to the given instance are only scheduled
on the specified CPUs.
- The parameter format is a comma-separated list of CPU IDs or CPU
- ID ranges. The ranges are defined by a lower and higher boundary,
- separated by a dash. The boundaries are inclusive.
+ The format of the mask can be given in three forms. First, the word
+ "all", which signifies the common case where all VCPUs can live on
+ any CPU, based on the hypervisor's decisions.
+
+ Second, a comma-separated list of CPU IDs or CPU ID ranges. The
+ ranges are defined by a lower and higher boundary, separated by a
+ dash, and the boundaries are inclusive. In this form, all VCPUs of
+ the instance will be mapped on the selected list of CPUs. Example:
+ ``0-2,5``, mapping all VCPUs (no matter how many) onto physical CPUs
+ 0, 1, 2 and 5.
+
+ The last form is used for explicit control of VCPU-CPU pinnings. In
+ this form, the list of VCPU mappings is given as a colon (:)
+ separated list, whose elements are the possible values for the
+ second or first form above. In this form, the number of elements in
+ the colon-separated list *must* equal the number of VCPUs of the
+ instance.
+
+ Example::
+
+ # Map the entire instance to CPUs 0-2
+ gnt-instance modify -H cpu_mask=0-2 my-inst
+
+ # Map vCPU 0 to physical CPU 1 and vCPU 1 to CPU 3 (assuming 2 vCPUs)
+ gnt-instance modify -H cpu_mask=1:3 my-inst
+
+ # Pin vCPU 0 to CPUs 1 or 2, and vCPU 1 to any CPU
+ gnt-instance modify -H cpu_mask=1-2:all my-inst
+
+ # Pin vCPU 0 to any CPU, vCPU 1 to CPUs 1, 3, 4 or 5, and vCPU 2 to
+ # CPU 0 (backslashes for escaping the comma)
+ gnt-instance modify -H cpu_mask=all:1\\,3-5:0 my-inst
+
+ # Pin entire VM to CPU 0
+ gnt-instance modify -H cpu_mask=0 my-inst
+
+ # Turn off CPU pinning (default setting)
+ gnt-instance modify -H cpu_mask=all my-inst
usb\_mouse
Valid for the KVM hypervisor.
[\--disks *idx*] {*instance*}
**replace-disks** [\--submit] [\--early-release] [\--ignore-ipolicy]
-{\--iallocator *name* \| \--node *node* } {*instance*}
+{{-I\|\--iallocator} *name* \| \--node *node* } {*instance*}
**replace-disks** [\--submit] [\--early-release] [\--ignore-ipolicy]
{\--auto} {*instance*}
ACTIVATE-DISKS
^^^^^^^^^^^^^^
-**activate-disks** [\--submit] [\--ignore-size] {*instance*}
+**activate-disks** [\--submit] [\--ignore-size] [\--wait-for-sync] {*instance*}
Activates the block devices of the given instance. If successful, the
command will show the location and name of the block devices::
in LVM devices). This should not be used in normal cases, but only
when activate-disks fails without it.
+The ``--wait-for-sync`` option will ensure that the command returns only
+after the instance's disks are synchronised (mostly for DRBD); this can
+be useful to ensure consistency, as otherwise there are no commands that
+can wait until synchronisation is done. However, when passing this
+option, the command will have additional output, making it harder to
+parse the disk information.
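+
+A typical invocation, waiting for the disks to finish synchronising
+(``my-inst`` is a placeholder instance name)::
+
+  gnt-instance activate-disks --wait-for-sync my-inst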
+
Note that it is safe to run this command while the instance is already
running.
RECREATE-DISKS
^^^^^^^^^^^^^^
-| **recreate-disks** [\--submit] [-n node1:[node2]]
+| **recreate-disks** [\--submit]
+| [{-n node1:[node2] \| {-I\|\--iallocator *name*}}]
| [\--disk=*N*[:[size=*VAL*][,mode=*ro\|rw*]]] {*instance*}
Recreates all or a subset of disks of the given instance.
has. Note that changing nodes is only allowed when all disks are
replaced, e.g. when no ``--disk`` option is passed.
+Another method of choosing the nodes to place the instance on is to use
+an iallocator, by passing the ``--iallocator`` option. The primary and
+secondary nodes will then be chosen by the specified iallocator plugin.
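+
+For example, letting an iallocator pick the nodes (this assumes the
+``hail`` iallocator is installed; ``my-inst`` is a placeholder)::
+
+  gnt-instance recreate-disks -I hail my-inst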
+
See **ganeti(7)** for a description of ``--submit`` and other common
options.
instance's runtime before migrating it (eg. ballooning an instance
down because the target node doesn't have enough available memory).
+If an instance has the backend parameter ``always_failover`` set to
+true, then the migration is automatically converted into a failover.
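+
+For example, to request failover-instead-of-migration behaviour for an
+instance (``my-inst`` is a placeholder instance name)::
+
+  gnt-instance modify -B always_failover=true my-inst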
+
See **ganeti(7)** for a description of ``--submit`` and other common
options.