diff --git a/man/gnt-instance.sgml b/man/gnt-instance.sgml
index 36a6c14..e6d190e 100644
--- a/man/gnt-instance.sgml
+++ b/man/gnt-instance.sgml
@@ -2,7 +2,7 @@ (document date entity)
-  May 16, 2007">
+  May 29, 2008">
   8">
@@ -20,6 +20,7 @@ (copyright years)
   2006
   2007
+  2008
   Google Inc.
   &dhdate;
@@ -66,32 +67,70 @@ (synopsis of "gnt-instance add")
   --swap-size disksize
   -m memsize
-  -o os-type
   -b bridge
   --mac MAC-address
   --hvm-boot-order boot-order
+  --hvm-acpi ACPI-support
+  --hvm-pae PAE-support
+  --hvm-cdrom-image-path cdrom-image-path
+  --hvm-nic-type NICTYPE
+  --hvm-disk-type DISKTYPE
+  --vnc-bind-address vnc-bind-address
   --kernel {default | kernel_path}
   --initrd {default | none | initrd_path}
+  --file-storage-dir dir_path
+  --file-driver {loop | blktap}
-  -t {diskless | plain | local_raid1 | remote_raid1 | drbd}
-  -n node[:secondary-node]
+  -t {diskless | file | plain | drbd}
+  {-n node[:secondary-node] | --iallocator name}
+  -o os-type
   instance
@@ -192,12 +231,70 @@ (gnt-instance add option descriptions)
-  The --hvm-boot-order option is only relevant for Xen HVM instances
-  and ignored by all other instance types.
+  The default is not to set an HVM boot order, which is interpreted as
+  'dc'. This option, like all options starting with 'hvm', is only
+  relevant for Xen HVM instances and ignored by all other instance
+  types.
+
+  The --hvm-acpi option specifies if Xen should enable ACPI support for
+  this HVM instance. Valid values are true or false. The default value
+  is false, disabling ACPI support for this instance.
+
+  The --hvm-pae option specifies if Xen should enable PAE support for
+  this HVM instance. Valid values are true or false. The default is
+  false, disabling PAE support for this instance.
+
+  The --hvm-cdrom-image-path option specifies the path to the file Xen
+  uses to emulate a virtual CDROM drive for this HVM instance. Valid
+  values are either an absolute path to an existing file or None, which
+  disables virtual CDROM support for this instance. The default is
+  None, disabling virtual CDROM support.
+
+  The --hvm-nic-type option specifies the NIC type Xen should use for
+  this HVM instance. Valid choices are rtl8139, ne2k_pci, ne2k_isa and
+  paravirtual, with rtl8139 as the default. The paravirtual setting is
+  intended for use with the GPL PV drivers inside HVM Windows
+  instances.
+
+  The --hvm-disk-type option specifies the disk type Xen should use for
+  the HVM instance. Valid choices are ioemu and paravirtual, with ioemu
+  as the default. The paravirtual setting is intended for use with the
+  GPL PV drivers inside HVM Windows instances.
+
+  The --vnc-bind-address option specifies the address that the VNC
+  listener for this instance should bind to. Valid values are IPv4
+  addresses. Use the address 0.0.0.0 to bind to all available
+  interfaces (this is the default) or specify the address of one of the
+  interfaces on the node to restrict listening to that interface.
+
+  The --iallocator option specifies the instance allocator plugin to
+  use. If you pass this option, the allocator will select nodes for
+  this instance automatically, so you don't need to pass them with the
+  -n option. For more information please refer to the instance
+  allocator documentation.
 
-  The --kernel options allows the instance to use a custom kernel (if a
+  The --kernel option allows the instance to use a custom kernel (if a
   filename is passed) or to use the default kernel
   (@CUSTOM_XEN_KERNEL@), if the string default is passed.
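As a quick sketch of how the new add-time options combine (the hostnames,
the OS names and the allocator name below are placeholders, and the --hvm-*
and --vnc-bind-address switches only make sense on a Xen HVM cluster):

# gnt-instance add -t plain -s 20g -m 1024 -o some-hvm-os \
  --hvm-acpi true --hvm-pae true --hvm-boot-order dc \
  --hvm-nic-type rtl8139 --hvm-disk-type ioemu \
  --vnc-bind-address 0.0.0.0 -n node1.example.com hvm1.example.com
# gnt-instance add -t drbd -s 30g -m 512 -o debian-etch \
  --iallocator myallocator instance3.example.com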
@@ -231,30 +328,15 @@ (the -t disk template choices)
-  plain
+  file
-      Disk devices will be logical volumes.
+      Disk devices will be regular files.
 
-  local_raid1
-      Disk devices will be md raid1 arrays over two local logical
-      volumes.
 
-  remote_raid1
+  plain
-      Disk devices will be md raid1 arrays with one component (so it's
-      not actually raid1): a drbd (0.7.x) device between the instance's
-      primary node and the node given by the second value of the -n
-      option.
+      Disk devices will be logical volumes.
@@ -262,10 +344,7 @@ (the drbd template)
   drbd
       Disk devices will be drbd (version 8.x) on top of
-      lvm volumes. They are equivalent in functionality to
-      remote_raid1, but are recommended for new instances (if you have
-      drbd 8.x installed).
+      lvm volumes.
@@ -274,7 +353,7 @@
   The optional second value of the -n option is used for
-  the remote raid template type and specifies the remote node.
+  the drbd template type and specifies the remote node.
@@ -284,11 +363,63 @@ (file storage options and examples)
+  The --file-storage-dir option specifies the relative path under the
+  cluster-wide file storage directory to store file-based disks. It is
+  useful for having different subdirectories for different instances.
+  The full path of the directory where the disk files are stored will
+  consist of cluster-wide file storage directory + optional
+  subdirectory + instance name. Example:
+  /srv/ganeti/file-storage/mysubdir/instance1.example.com. This option
+  is only relevant for instances using the file storage backend.
+
+  The --file-driver option specifies the driver to use for file-based
+  disks. Note that currently these drivers work with the xen hypervisor
+  only. This option is only relevant for instances using the file
+  storage backend. The available choices are:
+
+  loop
+      Kernel loopback driver.
+
+  blktap
+      blktap driver.
+
+  The loop driver uses loopback devices to access the filesystem within
+  the file. However, running I/O-intensive applications in your
+  instance using the loop driver might result in slowdowns.
+  Furthermore, if you use the loopback driver, consider increasing the
+  maximum number of loopback devices (on most systems it's 8) using the
+  max_loop kernel parameter.
+
+  In order to be able to use the blktap driver you should check that
+  the 'blktapctrl' user-space disk agent is running (usually it is
+  started automatically via xend). This user-level disk I/O interface
+  has the advantage of better performance. Especially if you use a
+  network file system (e.g. NFS) to store your instances, this is the
+  recommended choice.
 
   Example:
+# gnt-instance add -t file -s 30g -m 512 -o debian-etch \
+  -n node1.example.com --file-storage-dir=mysubdir instance1.example.com
 # gnt-instance add -t plain -s 30g -m 512 -o debian-etch \
   -n node1.example.com instance1.example.com
-# gnt-instance add -t remote_raid1 -s 30g -m 512 -o debian-etch \
+# gnt-instance add -t drbd -s 30g -m 512 -o debian-etch \
   -n node1.example.com:node2.example.com instance2.example.com
@@ -335,7 +466,7 @@ (synopsis of "gnt-instance list")
   list
   --no-headers
   --separator=SEPARATOR
-  -o FIELD,...
+  -o [+]FIELD,...
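For illustration, two list invocations built from this synopsis (the
instance data is whatever exists on the cluster; the fields used here are
documented below, plus the usual name field):

# gnt-instance list --no-headers --separator=: -o name,oper_state
# gnt-instance list -o +vcpus,tags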
@@ -405,8 +536,27 @@ (gnt-instance list output fields)
   oper_state
-      the actual state of the instance; can take of the values
-      "running", "stopped", "(node down)"
+      the actual state of the instance; can be one of the values
+      "running", "stopped", "(node down)"
+
+  status
+      combined form of admin_state and oper_state; this can be one of:
+      ERROR_nodedown if the node of the instance is down, ERROR_down if
+      the instance should run but is down, ERROR_up if the instance
+      should be stopped but is actually running, ADMIN_down if the
+      instance has been stopped (and is stopped) and running if the
+      instance is set to be running (and is running)
@@ -436,15 +586,58 @@
+  sda_size
+      the size of the instance's first disk
+
+  sdb_size
+      the size of the instance's second disk
+
+  vcpus
+      the number of VCPUs allocated to the instance
+
+  tags
+      comma-separated list of the instance's tags
+
+  serial_no
+      the so-called 'serial number' of the instance; this is a numeric
+      field that is incremented each time the instance is modified, and
+      it can be used to detect modifications
+
+  If the value of the -o option starts with the character +, the new
+  fields will be added to the default list. This allows one to quickly
+  see the default list plus a few other fields, instead of retyping the
+  entire list of fields.
 
-  There is a subtle grouping about the available output fields: all
-  fields except for oper_state and oper_ram are configuration values
-  and not run-time values. So if you don't select any of the fields,
-  the query will be satisfied instantly from the cluster configuration,
-  without having to ask the remote nodes for the data. This can be
-  helpful for big clusters when you only want some data and it makes
-  sense to specify a reduced set of output fields.
+  There is a subtle grouping about the available output fields: all
+  fields except for oper_state, oper_ram and status are configuration
+  values and not run-time values. So if you don't select any of these
+  fields, the query will be satisfied instantly from the cluster
+  configuration, without having to ask the remote nodes for the data.
+  This can be helpful for big clusters when you only want some data and
+  it makes sense to specify a reduced set of output fields.
@@ -468,6 +661,10 @@ (gnt-instance info)
   info
+  -s | --static
   instance
 
   Show detailed information about the (given) instances. This is
   different from list as it shows detailed data about the instance's
-  disks (especially useful for remote raid templates).
+  disks (especially useful for the drbd disk template).
+
+  If the -s option is used, only information available in the
+  configuration file is returned, without querying nodes, making the
+  operation faster.
@@ -490,6 +693,15 @@ (synopsis of "gnt-instance modify")
   -b bridge
   --mac MAC-address
   --hvm-boot-order boot-order
+  --hvm-acpi ACPI-support
+  --hvm-pae PAE-support
+  --hvm-cdrom-image-path cdrom-image-path
+  --hvm-nic-type NICTYPE
+  --hvm-disk-type DISKTYPE
+  --vnc-bind-address vnc-bind-address
   --kernel {default | kernel_path}
@@ -528,6 +740,49 @@ (gnt-instance modify option descriptions)
+  The --hvm-acpi option specifies if Xen should enable ACPI support for
+  this HVM instance. Valid values are true or false.
+
+  The --hvm-pae option specifies if Xen should enable PAE support for
+  this HVM instance. Valid values are true or false.
+
+  The --hvm-cdrom-image-path option specifies the path to the file Xen
+  uses to emulate a virtual CDROM drive for this HVM instance. Valid
+  values are either an absolute path to an existing file or None, which
+  disables virtual CDROM support for this instance.
+
+  The --hvm-nic-type option specifies the NIC type Xen should use for
+  this HVM instance. Valid choices are rtl8139, ne2k_pci, ne2k_isa and
+  paravirtual, with rtl8139 as the default. The paravirtual setting is
+  intended for use with the GPL PV drivers inside HVM Windows
+  instances.
+
+  The --hvm-disk-type option specifies the disk type Xen should use for
+  the HVM instance. Valid choices are ioemu and paravirtual, with ioemu
+  as the default. The paravirtual setting is intended for use with the
+  GPL PV drivers inside HVM Windows instances.
+
+  The --vnc-bind-address option specifies the address that the VNC
+  listener for this instance should bind to. Valid values are IPv4
+  addresses. Use the address 0.0.0.0 to bind to all available
+  interfaces.
 
   All the changes take effect at the next restart. If the instance is
   running, there is no effect on the instance.
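A small sketch of adjusting the new parameters on an existing instance
(the instance name and the ISO path are placeholders; as noted above, the
changes only take effect at the next restart):

# gnt-instance modify --hvm-acpi true --hvm-pae true \
  --hvm-cdrom-image-path /srv/iso/example.iso instance1.example.com
# gnt-instance modify --vnc-bind-address 0.0.0.0 instance1.example.com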
@@ -540,6 +795,7 @@ (synopsis of "gnt-instance reinstall")
   reinstall
   -o os-type
   -f force
+  --select-os
   instance
@@ -549,6 +805,12 @@
   ...is specified, the operating system is changed.
+
+  The --select-os option switches to an interactive OS reinstall. The
+  user is prompted to select the OS template from the list of available
+  OS templates.
@@ -749,7 +1011,7 @@ (description of "gnt-instance reboot")
   ...recreates the hypervisor config for the instance and starts the
   instance. A full reboot does the equivalent of gnt-instance shutdown
   && gnt-instance startup.
-  The default is soft reboot.
+  The default is hard reboot.
@@ -785,12 +1047,21 @@ (CONSOLE)
   console
+  --show-cmd
   instance
 
   Connects to the console of the given instance. If the instance
-  is not up, an error is returned.
+  is not up, an error is returned. Use the --show-cmd option to display
+  the command instead of executing it.
+
+  For HVM instances, this will attempt to connect to the serial console
+  of the instance. To connect to the virtualized "physical" console of
+  an HVM instance, use a VNC client with the connection info from
+  gnt-instance info.
@@ -811,93 +1082,54 @@ (REPLACE-DISKS)
-  replace-disks [--new-secondary NODE] instance
+  replace-disks -p instance
 
-  replace-disks [--new-secondary NODE] instance
+  replace-disks -s instance
 
-  replace-disks {-s | -p} instance
+  replace-disks {--iallocator name | --new-secondary NODE} instance
 
   This command is a generalized form for adding and replacing
-  disks.
+  disks. It is currently only valid for the mirrored (DRBD) disk
+  template.
 
-  The first form is usable with the remote_raid1 disk template. This
-  will replace the disks on both the primary and secondary node, and
-  optionally will change the secondary node to a new one if you pass
-  the --new-secondary option.
-
-  The second and third forms are usable with the drbd disk template.
-  The second form will do a secondary replacement, but as opposed to
-  the remote_raid1 will not replace the disks on the primary, therefore
-  it will execute faster. The third form will replace the disks on
-  either the primary (-p) or the secondary (-s) node of the instance
-  only, without changing the node.
+  The first form (when passing the -p option) will replace the disks on
+  the primary node, while the second form (when passing the -s option)
+  will replace the disks on the secondary node.
+
+  The third form (when passing either the --iallocator or the
+  --new-secondary option) is designed to change the secondary node of
+  the instance. Specifying --iallocator makes the new secondary be
+  selected automatically by the specified allocator plugin, otherwise
+  the new secondary node will be the one chosen manually via the
+  --new-secondary option.
 
-  ADD-MIRROR
-
-  add-mirror -b sdX -n node instance
-
-  Adds a new mirror to the disk layout of the instance, if the instance
-  has a remote raid disk layout. The new mirror member will be between
-  the instance's primary node and the node given with the -n option.
-
-  REMOVE-MIRROR
-
-  removemirror -b sdX -p id instance
-
-  Removes a mirror component from the disk layout of the instance, if
-  the instance has a remote raid disk layout.
-
-  You need to specify which disk to act on using the -b option (either
-  sda or sdb) and the mirror component, which is identified by the -p
-  option. You can find the list of valid identifiers with the info
-  command.
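The three replace-disks forms described above might be invoked as follows
(node and allocator names are placeholders):

# gnt-instance replace-disks -p instance2.example.com
# gnt-instance replace-disks -s instance2.example.com
# gnt-instance replace-disks --new-secondary node3.example.com instance2.example.com
# gnt-instance replace-disks --iallocator myallocator instance2.example.com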
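Likewise, the new --select-os and --show-cmd switches documented above
take no extra argument; for example (the instance name is a placeholder):

# gnt-instance reinstall --select-os instance1.example.com
# gnt-instance console --show-cmd instance1.example.com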
   ACTIVATE-DISKS
 
   activate-disks instance
@@ -909,16 +1141,16 @@
   ...successful, the command will show the location and name of the
   block devices:
-      node1.example.com:sda:/dev/md0
-      node1.example.com:sdb:/dev/md1
+      node1.example.com:sda:/dev/drbd0
+      node1.example.com:sdb:/dev/drbd1
 
   In this example, node1.example.com is the name of the node on which
   the devices have been activated. The sda and sdb are the names of the
-  block devices inside the instance. /dev/md0 and /dev/md1 are the
-  names of the block devices as visible on the node.
+  block devices inside the instance. /dev/drbd0 and /dev/drbd1 are the
+  names of the block devices as visible on the node.
@@ -937,13 +1169,95 @@ (DEACTIVATE-DISKS and the new GROW-DISK section)
   De-activates the block devices of the given instance. Note that if
-  you run this command for a remote raid instance type, while it is
-  running, it will not be able to shut down the block devices on the
-  primary node, but it will shut down the block devices on the
-  secondary nodes, thus breaking the replication.
+  you run this command for an instance with a drbd disk template, while
+  it is running, it will not be able to shut down the block devices on
+  the primary node, but it will shut down the block devices on the
+  secondary nodes, thus breaking the replication.
 
+  GROW-DISK
+
+  grow-disk [--no-wait-for-sync] instance disk amount
+
+  Grows an instance's disk. This is only possible for instances having
+  a plain or drbd disk template.
+
+  Note that this command only changes the block device size; it will
+  not grow the actual filesystems, partitions, etc. that live on that
+  disk. Usually, you will need to:
+
+  1. use gnt-instance grow-disk
+
+  2. reboot the instance (later, at a convenient time)
+
+  3. use a filesystem resizer, such as ext2online(8) or xfs_growfs(8),
+     to resize the filesystem, or use fdisk(8) to change the partition
+     table on the disk
+
+  The disk argument is either sda or sdb. The amount argument is given
+  either as a plain number (in which case it is the amount, in
+  mebibytes, by which to increase the disk) or, as in the create
+  instance operation, with a suffix denoting the unit.
+
+  Note that the disk grow operation might complete on one node but fail
+  on the other; this will leave the instance with different-sized LVs
+  on the two nodes, but this will not create problems (except for
+  unused space).
+
+  If you do not want gnt-instance to wait for the new disk region to be
+  synced, use the --no-wait-for-sync option.
+
+  Example (increase sda for instance1 by 16GiB):
+
+# gnt-instance grow-disk instance1.example.com sda 16g
+
+  Also note that disk shrinking is not supported; use gnt-backup export
+  and then gnt-backup import to reduce the disk size of an instance.
@@ -963,8 +1277,8 @@ (FAILOVER)
   Failover will fail the instance over its secondary node. This works
-  only for instances having a remote raid disk layout.
+  only for instances having a drbd disk template.
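As a rough end-to-end sketch of the grow-disk procedure above (instance
name and size are illustrative; the final filesystem resize then happens
inside the instance with ext2online, xfs_growfs or fdisk, depending on how
the disk is laid out):

# gnt-instance grow-disk instance1.example.com sda 4g
# gnt-instance reboot instance1.example.com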