X-Git-Url: https://code.grnet.gr/git/ganeti-local/blobdiff_plain/7e84d392e15f5394f16a2febeb71edcc93033831..da961187f97344fde390140ebb2f10d10d334d51:/man/gnt-instance.sgml?ds=sidebyside diff --git a/man/gnt-instance.sgml b/man/gnt-instance.sgml index e5895d7..2d9d44f 100644 --- a/man/gnt-instance.sgml +++ b/man/gnt-instance.sgml @@ -2,7 +2,7 @@ - May 16, 2007"> + February 11, 2009"> 8"> @@ -20,6 +20,8 @@ 2006 2007 + 2008 + 2009 Google Inc. &dhdate; @@ -28,7 +30,7 @@ &dhucpackage; &dhsection; - ganeti 1.2 + ganeti 2.0 &dhpackage; @@ -62,31 +64,125 @@ ADD add - -n node - -s disksize - -o os-type - -m memsize - -b bridge - -t + -t diskless + file plain - local_raid1 - remote_raid1 - - + drbd + + + + + --disk=N:size=VAL,mode=ro|rw + -s SIZE + + + + --net=N:options + --no-nics + + + -B BEPARAMS + + + -H HYPERVISOR:option=value + + + --file-storage-dir dir_path + --file-driver + loop + blktap + + + + + -n node:secondary-node + --iallocator name + + + + -o os-type + + --submit + + instance + - Creates a new instance on the specified - host. instance must be in DNS and - resolve to a IP in the same network as the nodes in the - cluster. + Creates a new instance on the specified host. The + instance argument must be in DNS, + but depending on the bridge setup, need not be in the same + network as the nodes in the cluster. + + + + The option specifies the parameters + for the disks of the instance. The numbering of disks starts + at zero, and at least one disk needs to be passed. For each + disk, at least the size needs to be given, and optionally + the access mode (read-only or the default of read-write) can + also be specified. The size is interpreted (when no unit is + given) in mebibytes. You can also use one of the suffixes + m, g or + t to specificy the exact the units used; + these suffixes map to mebibytes, gibibytes and tebibytes. + + + + Alternatively, a single-disk instance can be created via the + option which takes a single argument, + the size of the disk. This is similar to the Ganeti 1.2 + version (but will only create one disk). + + + + The minimum disk specification is therefore + --disk 0:size=20G (or -s + 20G when using the option), + and a three-disk instance can be specified as + --disk 0:size=20G --disk 1:size=4G --disk + 2:size=100G. + + + + The NICs of the instances can be specified via the + option. By default, one NIC is + created for the instance, with a random MAC, and connected + to the default bridge. Each NIC can take up to three + parameters (all optional): + + + mac + + either a value or GENERATE + to generate a new unique MAC + + + + ip + + specifies the IP address assigned to the + instance from the Ganeti side (this is not necessarily + what the instance will use, but what the node expects + the instance to use) + + + + bridge + + specifies the bridge to attach this NIC + to + + + - The option specifies the disk size for - the instance, in gigibytes (defaults to 20 GiB). + Alternatively, if no network is desired for the instance, you + can prevent the default of one NIC with the + option. @@ -96,14 +192,318 @@ - The option specifies the memory size for - the instance, in megibytes (defaults to 128 MiB). + The option specifies the backend + parameters for the instance. If no such parameters are + specified, the values are inherited from the cluster. 
Possible + parameters are: + + + memory + + the memory size of the instance; as usual, + suffixes can be used to denote the unit, otherwise the + value is taken in mebibites + + + + vcpus + + the number of VCPUs to assign to the instance + (if this value makes sense for the hypervisor) + + + + auto_balance + + whether the instance is considered in the N+1 + cluster checks (enough redundancy in the cluster to + survive a node failure) + + + + + + + The option specified the hypervisor to + use for the instance (must be one of the enabled hypervisors + on the cluster) and optionally custom parameters for this + instance. If not other options are used (i.e. the invocation + is just -H + NAME) the instance + will inherit the cluster options. The defaults below show + the cluster defaults at cluster creation time. + + + + The possible hypervisor options are as follows: + + + boot_order + + Valid for the Xen HVM and KVM + hypervisors. + + A string value denoting the boot order. This + has different meaning for the Xen HVM hypervisor and + for the KVM one. + + + For Xen HVM, The boot order is a string of letters + listing the boot devices, with valid device letters + being: + + + + a + + + floppy drive + + + + + c + + + hard disk + + + + + d + + + CDROM drive + + + + + n + + + network boot (PXE) + + + + + + The default is not to set an HVM boot order which is + interpreted as 'dc'. + + + + + + cdrom_image_path + + Valid for the Xen HVM and KVM hypervisors. + + The path to a CDROM image to attach to the + instance. + + + + + nic_type + + Valid for the Xen HVM and KVM hypervisors. + + + This parameter determines the way the network cards + are presented to the instance. The possible options are: + + rtl8139 (default for Xen HVM) (HVM & KVM) + ne2k_isa (HVM & KVM) + ne2k_pci (HVM & KVM) + i82551 (KVM) + i82557b (KVM) + i82559er (KVM) + pcnet (KVM) + e1000 (KVM) + paravirtual (default for KVM) (HVM & KVM) + + + + + + disk_type + + Valid for the Xen HVM and KVM hypervisors. + + + This parameter determines the way the disks are + presented to the instance. The possible options are: + + ioemu (default for HVM & KVM) (HVM & KVM) + ide (HVM & KVM) + scsi (KVM) + sd (KVM) + mtd (KVM) + pflash (KVM) + + + + + + vnc_bind_address + + Valid for the Xen HVM and KVM hypervisors. + + Specifies the address that the VNC listener for + this instance should bind to. Valid values are IPv4 + addresses. Use the address 0.0.0.0 to bind to all + available interfaces (this is the default) or specify + the address of one of the interfaces on the node to + restrict listening to that interface. + + + + + vnc_tls + + Valid for the KVM hypervisor. + + A boolean option that controls whether the + VNC connection is secured with TLS. + + + + + vnc_x509_path + + Valid for the KVM hypervisor. + + If is enabled, this + options specifies the path to the x509 certificate to + use. + + + + + vnc_x509_verify + + Valid for the KVM hypervisor. + + + + + acpi + + Valid for the Xen HVM and KVM hypervisors. + + + A boolean option that specifies if the hypervisor + should enable ACPI support for this instance. By + default, ACPI is disabled. + + + + + + pae + + Valid for the Xen HVM and KVM hypervisors. + + + A boolean option that specifies if the hypervisor + should enabled PAE support for this instance. The + default is false, disabling PAE support. + + + + + + kernel_path + + Valid for the Xen PVM and KVM hypervisors. + + + This option specifies the path (on the node) to the + kernel to boot the instance with. 
Xen PVM instances + always require this, while for KVM if this option is + empty, it will cause the machine to load the kernel + from its disks. + + + + + + kernel_args + + Valid for the Xen PVM and KVM hypervisors. + + + This options specifies extra arguments to the kernel + that will be loaded. device. This is always used + for Xen PVM, while for KVM it is only used if the + option is also + specified. + + + + The default setting for this value is simply + "ro", which mounts the root + disk (initially) in read-only one. For example, + setting this to single will + cause the instance to start in single-user mode. + + + + + + initrd_path + + Valid for the Xen PVM and KVM hypervisors. + + + This option specifies the path (on the node) to the + initrd to boot the instance with. Xen PVM instances + can use this always, while for KVM if this option is + only used if the option + is also specified. You can pass here either an + absolute filename (the path to the initrd) if you + want to use an initrd, or use the format + no_initrd_path for no initrd. + + + + + + root_path + + Valid for the Xen PVM and KVM hypervisors. + + + This options specifies the name of the root + device. This is always needed for Xen PVM, while for + KVM it is only used if the + option is also + specified. + + + + + + serial_console + + Valid for the KVM hypervisor. + + This boolean option specifies whether to + emulate a serial console for the instance. + + + + + + - The option specifies the bridge to which the - instance will be connected. (defaults to the cluster-wide default - bridge specified at cluster initialization time). + The option specifies the instance + allocator plugin to use. If you pass in this option the allocator + will select nodes for this instance automatically, so you don't need + to pass them with the option. For more + information please refer to the instance allocator documentation. @@ -120,28 +520,23 @@ - plain + file - Disk devices will be logical volumes. + Disk devices will be regular files. - local_raid1 + plain - - Disk devices will be md raid1 arrays over two local - logical volumes. - + Disk devices will be logical volumes. - remote_raid1 + drbd - Disk devices will be md raid1 arrays with one - component (so it's not actually raid1): a drbd device - between the instance's primary node and the node given - by the option . + Disk devices will be drbd (version 8.x) on top of + lvm volumes. @@ -149,9 +544,8 @@ - The option is used with - the remote raid disk template type and specifies the remote - node. + The optional second value of the is used for + the drbd template type and specifies the remote node. @@ -160,16 +554,205 @@ option. + + The specifies the relative path + under the cluster-wide file storage directory to store file-based + disks. It is useful for having different subdirectories for + different instances. The full path of the directory where the disk + files are stored will consist of cluster-wide file storage directory + + optional subdirectory + instance name. Example: + /srv/ganeti/file-storage/mysubdir/instance1.example.com. This option + is only relevant for instances using the file storage backend. + + + + The specifies the driver to use for + file-based disks. Note that currently these drivers work with the + xen hypervisor only. This option is only relevant for instances using + the file storage backend. The available choices are: + + + loop + + + Kernel loopback driver. This driver uses loopback + devices to access the filesystem within the + file. 
However, running I/O intensive applications in + your instance using the loop driver might result in + slowdowns. Furthermore, if you use the loopback + driver consider increasing the maximum amount of + loopback devices (on most systems it's 8) using the + max_loop param. + + + + + blktap + + The blktap driver (for Xen hypervisors). In + order to be able to use the blktap driver you should + check if the 'blktapctrl' user space disk agent is + running (usually automatically started via xend). This + user-level disk I/O interface has the advantage of + better performance. Especially if you use a network + file system (e.g. NFS) to store your instances this is + the recommended choice. + + + + + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + Example: -# gnt-instance add -t plain -s 30 -m 512 -n node1.example.com \ -> instance1.example.com -# gnt-instance add -t remote_raid1 --secondary-node node3.example.com \ -> -s 30 -m 512 -n node1.example.com instance2.example.com +# gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \ + -n node1.example.com --file-storage-dir=mysubdir instance1.example.com +# gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \ + -n node1.example.com instance1.example.com +# gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \ + -n node1.example.com:node2.example.com instance2.example.com + + + + BATCH-CREATE + + batch-create + instances_file.json + + + + This command (similar to the Ganeti 1.2 + batcher tool) submits multiple instance + creation jobs based on a definition file. The instance + configurations do not encompass all the possible options for + the add command, but only a subset. + + + + The instance file should be a valid-formed JSON file, + containing a dictionary with instance name and instance + parameters. The accepted parameters are: + + + + disk_size + + The size of the disks of the instance. + + + + disk_templace + + The disk template to use for the instance, + the same as in the add + command. + + + + backend + + A dictionary of backend parameters. + + + + hypervisor + + A dictionary with a single key (the + hypervisor name), and as value the hypervisor + options. If not passed, the default hypervisor and + hypervisor options will be inherited. + + + + mac, ip, bridge + + Specifications for the one NIC that will be + created for the instance. + + + + primary_node, secondary_node + + The primary and optionally the secondary node + to use for the instance (in case an iallocator script + is not used). + + + + iallocator + + Instead of specifying the nodes, an + iallocator script can be used to automatically compute + them. + + + + start + + whether to start the instance + + + + ip_check + + Skip the check for already-in-use instance; + see the description in the add + command for details. + + + + file_storage_dir, file_driver + + Configuration for the file + disk type, see the add command for + details. 
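        For orientation, each entry in the definition file corresponds
        roughly to one add invocation; for example, the
        instance3 entry shown in the sample file below (drbd template,
        debootstrap OS, a single 25G disk, the dumb allocator) is more
        or less equivalent to running:
        
# gnt-instance add -t drbd --disk 0:size=25G -o debootstrap \
  --iallocator dumb instance3
        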
+ + + + + + + A simple definition for one instance can be (with most of + the parameters taken from the cluster defaults): + +{ + "instance3": { + "template": "drbd", + "os": "debootstrap", + "disk_size": ["25G"], + "iallocator": "dumb" + }, + "instance5": { + "template": "drbd", + "os": "debootstrap", + "disk_size": ["25G"], + "iallocator": "dumb", + "hypervisor": "xen-hvm", + "hvparams": {"acpi": true}, + "backend": {"memory": 512} + } +} + + + + + The command will display the job id for each submitted instance, as follows: + +# gnt-instance batch-create instances.json +instance3: 11224 +instance5: 11225 + + @@ -178,6 +761,8 @@ remove + --ignore-failures + --submit instance @@ -187,6 +772,22 @@ you are not sure if you use an instance again, use shutdown first and leave it in the shutdown state for a while. + + + + + The option will cause the + removal to proceed even in the presence of errors during the + removal of the instance (e.g. during the shutdown or the + disk removal). If this option is not given, the command will + stop at the first error. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. @@ -204,14 +805,14 @@ list --no-headers --separator=SEPARATOR - -o FIELD,... + -o [+]FIELD,... + instance Shows the currently configured instances with memory usage, - disk usage, the node they are running on, and the CPU time, - counted in seconds, used by each instance since its latest - restart. + disk usage, the node they are running on, and their run + status. @@ -247,7 +848,7 @@ snodes - comma-separated list of secondary-nodes for the + comma-separated list of secondary nodes for the instance; usually this will be just one node @@ -260,179 +861,698 @@ - admin_ram + disk_template - the desired memory for the instance + the disk template of the instance - disk_template + oper_state - the disk template of the instance + the actual state of the instance; can be + one of the values "running", "stopped", "(node + down)" + + + + status + + combined form of admin_state and oper_stat; + this can be one of: + ERROR_nodedown if the + node of the instance is down, + ERROR_down if the + instance should run but is down, + ERROR_up if the + instance should be stopped but is actually running, + ADMIN_down if the + instance has been stopped (and is stopped) and + running if the + instance is set to be running (and is + running) + + + + oper_ram + + the actual memory usage of the instance as seen + by the hypervisor + + + + ip + + the ip address ganeti recognizes as associated with + the first instance interface + + + + mac + + the first instance interface MAC address + + + + bridge + + the bridge of the first instance NIC + + + + + sda_size + + the size of the instance's first disk - oper_state + sdb_size + + the size of the instance's second disk, if + any + + + + vcpus + + the number of VCPUs allocated to the + instance + + + + tags + + comma-separated list of the instances's + tags + + + + serial_no + + the so called 'serial number' of the + instance; this is a numeric field that is incremented + each time the instance is modified, and it can be used + to track modifications + + + + network_port + + If the instance has a network port assigned + to it (e.g. for VNC connections), this will be shown, + otherwise - will be + displayed. + + + + beparams + + A text format of the entire beparams for the + instance. 
It's more useful to select individual fields + from this dictionary, see below. + + + + disk.count + + The number of instance disks. + + + + disk.size/N + + The size of the instance's Nth disk. This is + a more generic form of the sda_size + and sdb_size fields. + + + + disk.sizes + + A comma-separated list of the disk sizes for + this instance. + + + + disk_usage + + The total disk space used by this instance on + each of its nodes. This is not the instance-visible + disk size, but the actual disk "cost" of the + instance. + + + + nic.mac/N + + The MAC of the Nth instance NIC. + + + + nic.ip/N + + The IP address of the Nth instance NIC. + + + + nic.bridge/N + + The bridge the Nth instance NIC is attached + to. + + + + nic.macs + + A comma-separated list of all the MACs of the + instance's NICs. + + + + nic.ips + + A comma-separated list of all the IP + addresses of the instance's NICs. + + + + nic.bridges + + A comma-separated list of all the bridges of the + instance's NICs. + + + + nic.count + + The number of instance nics. + + + + hv/NAME + + The value of the hypervisor parameter called + NAME. For details of what + hypervisor parameters exist and their meaning, see the + add command. + + + + be/memory + + The configured memory for the instance. + + + + be/vcpus + + The configured number of VCPUs for the + instance. + + + + be/auto_balance + + Whether the instance is considered in N+1 + checks. + + + + + + + If the value of the option starts with the character + +, the new field(s) will be added to the + default list. This allows to quickly see the default list + plus a few other fields, instead of retyping the entire list + of fields. + + + + There is a subtle grouping about the available output + fields: all fields except for , + and are + configuration value and not run-time values. So if you don't + select any of the these fields, the query will be satisfied + instantly from the cluster configuration, without having to + ask the remote nodes for the data. This can be helpful for + big clusters when you only want some data and it makes sense + to specify a reduced set of output fields. + + + The default output field list is: + + name + os + pnode + admin_state + oper_state + oper_ram + . + + + + + INFO + + + info + + -s + --static + + + --all + instance + + + + + Show detailed information about the given instance(s). This is + different from list as it shows detailed data + about the instance's disks (especially useful for the drbd disk + template). + + + + If the option is used, only information + available in the configuration file is returned, without + querying nodes, making the operation faster. + + + + Use the to get info about all instances, + rather than explicitely passing the ones you're interested in. + + + + + MODIFY + + + modify + + -H HYPERVISOR_PARAMETERS + + -B BACKEND_PARAMETERS + + + --net add:options + --net remove + --net N:options + + + + --disk add:size=SIZE + --disk remove + --disk N:mode=MODE + + + + --submit + + instance + + + + Modifies the memory size, number of vcpus, ip address, MAC + address and/or bridge for an instance. It can also add and + remove disks and NICs to/from the instance. Note that you + need to give at least one of the arguments, otherwise the + command complains. + + + + The option specifies hypervisor options + in the form of name=value[,...]. For details which options can be specified, see the add command. + + + + The option + adds a disk to the instance. The will remove the last disk of the + instance. 
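        For instance (disk size and instance name are illustrative),
        adding a second 10G disk and later dropping the last disk again
        would look like:
        
# gnt-instance modify --disk add:size=10g instance1.example.com
# gnt-instance modify --disk remove instance1.example.com
        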
The + option will change the mode of the Nth disk of the instance + between read-only (ro) and read-write + (rw). + + + + The option will + add a new NIC to the instance. The available options are the + same as in the add command (mac, ip, + bridge). The will remove the + last NIC of the instance, while the + option will change the parameters of the Nth instance NIC. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + + + + All the changes take effect at the next restart. If the + instance is running, there is no effect on the instance. + + + + + REINSTALL + + + reinstall + -o os-type + --select-os + -f force + --force-multiple + + + --instance + --node + --primary + --secondary + --all + + --submit + instance + + + + Reinstalls the operating system on the given instance(s). The + instance(s) must be stopped when running this command. If the + is specified, the operating + system is changed. + + + + The option switches to an + interactive OS reinstall. The user is prompted to select the OS + template from the list of available OS templates. + + + + Since this is a potentially dangerous command, the user will + be required to confirm this action, unless the + flag is passed. When multiple instances + are selected (either by passing multiple arguments or by + using the , + , or + options), the user must pass both the + and + options to skip the + interactive confirmation. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + + + + + + + RENAME + + + rename + --no-ip-check + --submit + instance + new_name + + + + Renames the given instance. The instance must be stopped + when running this command. The requirements for the new name + are the same as for adding an instance: the new name must be + resolvable and the IP it resolves to must not be reachable + (in order to prevent duplicate IPs the next time the + instance is started). The IP test can be skipped if the + option is passed. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + + + + + + + + Starting/stopping/connecting to console + + + STARTUP + + + startup + + --force + + --force-multiple + + + --instance + --node + --primary + --secondary + --all + + + -H + -B + + --submit + + name + + + + Starts one or more instances, depending on the following + options. 
The four available modes are: + + + - the actual state of the instance; can take of - the values "running", "stopped", "(node down)" + will start the instances given as arguments + (at least one argument required); this is the default + selection - oper_ram + --node - the actual memory usage of the instance as seen - by the hypervisor + will start the instances who have the given + node as either primary or secondary - ip + - the ip address ganeti recognizes as associated with - the instance interface + will start all instances whose primary node + is in the list of nodes passed as arguments (at least + one node required) - mac + - the instance interface MAC address + will start all instances whose secondary node + is in the list of nodes passed as arguments (at least + one node required) - bridge + --all - bridge the instance is connected to - + will start all instances in the cluster (no + arguments accepted) - There is a subtle grouping about the available output - fields: all fields except for - and are configuration value and - not run-time values. So if you don't select any of the - fields, the query will be satisfied - instantly from the cluster configuration, without having to - ask the remote nodes for the data. This can be helpful for - big clusters when you only want some data and it makes sense - to specify a reduced set of output fields. + Note that although you can pass more than one selection + option, the last one wins, so in order to guarantee the + desired result, don't pass more than one such option. - The default output field list is: - - name - os - pnode - admin_state - oper_state - oper_ram - . + + Use to start even if secondary disks are + failing. - - - - INFO - - - info - instance - - Show detailed information about the (given) instances. This - is different from list as it shows - detailed data about the instance's disks (especially useful - for remote raid templates). + The will skip the + interactive confirmation in the case the more than one + instance will be affected. - - - - MODIFY - - - modify - -m memsize - -p vcpus - -i ip - -b bridge - instance - - Modify the memory size, number of vcpus, ip address and/or bridge - for an instance. + The and options + specify extra, temporary hypervisor and backend parameters + that can be used to start an instance with modified + parameters. They can be useful for quick testing without + having to modify an instance back and forth, e.g.: + +# gnt-instance start -H root_args="single" instance1 +# gnt-instance start -B memory=2048 instance2 + + The first form will start the instance + instance1 in single-user mode, and + the instance instance2 with 2GB of + RAM (this time only, unless that is the actual instance + memory size already). - The memory size is given in MiB. Note that you need to give - at least one of the arguments, otherwise the command - complains. + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. - All the changes take effect at the next restart. If the - instance is running, there is no effect on the instance. 
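        When acting on many instances at once, the selection options
        above can be combined with the force-multiple flag to skip the
        per-run confirmation; a sketch (node name illustrative) that
        starts every instance whose primary node is node1 would be:
        
# gnt-instance start --force-multiple --primary node1.example.com
        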
+ Example: + +# gnt-instance start instance1.example.com +# gnt-instance start --node node1.example.com node2.example.com +# gnt-instance start --all + - - - - Starting/stopping/connecting to console - - STARTUP + SHUTDOWN - startup - --extra=PARAMS - instance + shutdown + + --force-multiple + + + --instance + --node + --primary + --secondary + --all + + + --submit + + name - Starts an instance. The node where to start the instance is - taken from the configuration. + Stops one or more instances. If the instance cannot be + cleanly stopped during a hardcoded interval (currently 2 + minutes), it will forcibly stop the instance (equivalent to + switching off the power on a physical machine). - The option is used to pass - additional argument to the instance's kernel for this start - only. Currently there is no way to specify a persistent set - of arguments (beside the one hardcoded). Note that this may - not apply to all virtualization types. + The , , + , and + options are similar as for the + startup command and they influence the + actual instances being shutdown. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. Example: -# gnt-instance start instance1.example.com -# gnt-instance start --extra single test1.example.com +# gnt-instance shutdown instance1.example.com +# gnt-instance shutdown --all - SHUTDOWN + REBOOT - shutdown - instance + reboot + + --type=REBOOT-TYPE + + --ignore-secondaries + + --force-multiple + + + --instance + --node + --primary + --secondary + --all + + + --submit + + name - Stops the instance. If the instance cannot be cleanly - stopped during a hardcoded interval (currently 2 minutes), - it will forcibly stop the instance (equivalent to switching - off the power on a physical machine). + Reboots one or more instances. The type of reboot depends on + the value of . A soft reboot does a + hypervisor reboot, a hard reboot does a instance stop, + recreates the hypervisor config for the instance and + starts the instance. A full reboot does the equivalent + of gnt-instance shutdown && gnt-instance + startup. The default is hard reboot. + + + + For the hard reboot the option + ignores errors for the + secondary node while re-assembling the instance disks. + + + + The , , + , and + options are similar as for the + startup command and they influence the + actual instances being rebooted. + + + + The will skip the + interactive confirmation in the case the more than one + instance will be affected. Example: -# gnt-instance shutdown instance1.example.com +# gnt-instance reboot instance1.example.com +# gnt-instance reboot --type=full instance1.example.com @@ -441,12 +1561,23 @@ CONSOLE console + --show-cmd instance - Connects to the console of the given instance. If the instance - is not up, an error is returned. + Connects to the console of the given instance. If the + instance is not up, an error is returned. Use the + option to display the command + instead of executing it. + + + + For HVM instances, this will attempt to connect to the + serial console of the instance. To connect to the + virtualized "physical" console of a HVM instance, use a VNC + client with the connection info from the + info command. @@ -467,70 +1598,81 @@ replace-disks - --new-secondary NODE + --submit + -p + --disks idx instance - - This command does a full add and replace for both disks of - an instance. 
It basically does an - addmirror and - removemirror for both disks of the - instance. - - - - If you also want to replace the secondary node during this - process (for example to fix a broken secondary node), you - can do so using the option. - - + + replace-disks + --submit + -s + --disks idx + instance + - - ADD-MIRROR - add-mirror - -b sdX - -n node + replace-disks + --submit + + --iallocator name + --new-secondary NODE + + instance + - Adds a new mirror to the disk layout of the instance, if the - instance has a remote raid disk layout. + This command is a generalized form for replacing disks. It + is currently only valid for the mirrored (DRBD) disk + template. + - The new mirror member will be between the instance's primary - node and the node given with the option. + + The first form (when passing the option) + will replace the disks on the primary, while the second form + (when passing the option will replace + the disks on the secondary node. For these two cases (as the + node doesn't change), it is possible to only run the replace + for a subset of the disks, using the option + which takes a list of + comma-delimited disk indices (zero-based), + e.g. 0,2 to replace only the first + and third disks. - - - REMOVE-MIRROR + + The third form (when passing either the + or the + option) is designed to + change secondary node of the instance. Specifying + makes the new secondary be + selected automatically by the specified allocator plugin, + otherwise the new secondary node will be the one chosen + manually via the option. + - - removemirror - -b sdX - -p id - instance - - Removes a mirror componenent from the disk layout of the - instance, if the instance has a remote raid disk layout. + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. - You need to specifiy on which disk to act on using the - option (either sda - or sdb) and the mirror component, which - is identified by the option. You can - find the list of valid identifiers with the - info command. + Note that it is not possible to select an offline or drained + node as a new secondary. + + ACTIVATE-DISKS activate-disks + --submit instance @@ -538,17 +1680,25 @@ successful, the command will show the location and name of the block devices: -node1.example.com:sda:/dev/md0 -node1.example.com:sdb:/dev/md1 +node1.example.com:disk/0:/dev/drbd0 +node1.example.com:disk/1:/dev/drbd1 In this example, node1.example.com is the name of the node on which the devices have been - activated. The sda and - sdb are the names of the block devices - inside the instance. /dev/md0 and - /dev/md1 are the names of the block - devices as visible on the node. + activated. The disk/0 and + disk/1 are the Ganeti-names of the + instance disks; how they are visible inside the instance is + hypervisor-specific. /dev/drbd0 and + /dev/drbd1 are the actual block devices + as visible on the node. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. @@ -562,17 +1712,115 @@ node1.example.com:sdb:/dev/md1 deactivate-disks + --submit instance De-activates the block devices of the given instance. 
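        In practice this is usually paired with
        activate-disks around offline maintenance:
        activate the disks, work on the block devices listed on the
        node, then release them again (instance name illustrative, and
        assuming the instance itself is stopped):
        
# gnt-instance activate-disks instance1.example.com
# gnt-instance deactivate-disks instance1.example.com
        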
Note - that if you run this command for a remote raid instance - type, while it is running, it will not be able to shutdown - the block devices on the primary node, but it will shutdown - the block devices on the secondary nodes, thus breaking the - replication. + that if you run this command for an instance with a drbd + disk template, while it is running, it will not be able to + shutdown the block devices on the primary node, but it will + shutdown the block devices on the secondary nodes, thus + breaking the replication. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + + + + + + GROW-DISK + + grow-disk + --no-wait-for-sync + --submit + instance + disk + amount + + + + Grows an instance's disk. This is only possible for + instances having a plain or + drbd disk template. + + + + Note that this command only change the block device size; it + will not grow the actual filesystems, partitions, etc. that + live on that disk. Usually, you will need to: + + + use gnt-instance grow-disk + + + reboot the instance (later, at a convenient + time) + + + use a filesystem resizer, such as + ext2online + 8 or + xfs_growfs + 8 to resize the + filesystem, or use + fdisk + 8 to change the + partition table on the disk + + + + + + + + The disk argument is the index of + the instance disk to grow. The + amount argument is given either + as a number (and it represents the amount to increase the + disk with in mebibytes) or can be given similar to the + arguments in the create instance operation, with a suffix + denoting the unit. + + + + Note that the disk grow operation might complete on one node + but fail on the other; this will leave the instance with + different-sized LVs on the two nodes, but this will not + create problems (except for unused space). + + If you do not want gnt-instance to wait for the new disk + region to be synced, use the + option. + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. + + + + Example (increase the first disk for instance1 by 16GiB): + +# gnt-instance grow-disk instance1.example.com 0 16g + + + + + Also note that disk shrinking is not supported; use + gnt-backup export and then + gnt-backup import to reduce the disk size + of an instance. + @@ -587,13 +1835,14 @@ node1.example.com:sdb:/dev/md1 failover -f --ignore-consistency + --submit instance Failover will fail the instance over its secondary - node. This works only for instances having a remote raid - disk layout. + node. This works only for instances having a drbd disk + template. @@ -601,7 +1850,17 @@ node1.example.com:sdb:/dev/md1 disks before failing over the instance. If you are trying to migrate instances off a dead node, this will fail. Use the option for this - purpose. + purpose. Note that this option can be dangerous as errors in + shutting down the instance will be ignored, resulting in + possibly having the instance running on two machines in + parallel (on disconnected DRBD drives). + + + + The option is used to send the job to + the master daemon but not wait for its completion. The job + ID will be shown so that it can be examined via + gnt-job info. 
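        As a sketch of that workflow (JOB_ID stands for the ID printed
        by the first command, which will differ on every run):
        
# gnt-instance failover -f --submit instance1.example.com
# gnt-job info JOB_ID
        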
@@ -612,6 +1871,148 @@ node1.example.com:sdb:/dev/md1 + + MIGRATE + + + migrate + -f + --cleanup + instance + + + + migrate + -f + --non-live + instance + + + + Migrate will move the instance to its secondary node without + shutdown. It only works for instances having the drbd8 disk + template type. + + + + The migration command needs a perfectly healthy instance, as + we rely on the dual-master capability of drbd8 and the disks + of the instance are not allowed to be degraded. + + + + The option will switch (for the + hypervisors that support it) between a "fully live" + (i.e. the interruption is as minimal as possible) migration + and one in which the instance is frozen, its state saved and + transported to the remote node, and then resumed there. This + all depends on the hypervisor support for two different + methods. In any case, it is not an error to pass this + parameter (it will just be ignored if the hypervisor doesn't + support it). + + + + If the option is passed, the + operation changes from migration to attempting recovery from + a failed previous migration. In this mode, ganeti checks if + the instance runs on the correct node (and updates its + configuration if not) and ensures the instances's disks are + configured correctly. In this mode, the + option is ignored. + + + + The option will skip the prompting for + confirmation. + + + Example (and expected output): + +# gnt-instance migrate instance1 +Migrate will happen to the instance instance1. Note that migration is +**experimental** in this version. This might impact the instance if +anything goes wrong. Continue? +y/[n]/?: y +* checking disk consistency between source and target +* ensuring the target is in secondary mode +* changing disks into dual-master mode + - INFO: Waiting for instance instance1 to sync disks. + - INFO: Instance instance1's disks are in sync. +* migrating instance to node2.example.com +* changing the instance's disks on source node to secondary + - INFO: Waiting for instance instance1 to sync disks. + - INFO: Instance instance1's disks are in sync. +* changing the instance's disks to single-master +# + + + + + + + + TAGS + + + ADD-TAGS + + + add-tags + --from file + instancename + tag + + + + Add tags to the given instance. If any of the tags contains + invalid characters, the entire operation will abort. + + + If the option is given, the list of + tags will be extended with the contents of that file (each + line becomes a tag). In this case, there is not need to pass + tags on the command line (if you do, both sources will be + used). A file name of - will be interpreted as stdin. + + + + + LIST-TAGS + + + list-tags + instancename + + + List the tags of the given instance. + + + + REMOVE-TAGS + + remove-tags + --from file + instancename + tag + + + + Remove tags from the given instance. If any of the tags are + not existing on the node, the entire operation will abort. + + + + If the option is given, the list of + tags will be extended with the contents of that file (each + line becomes a tag). In this case, there is not need to pass + tags on the command line (if you do, both sources will be + used). A file name of - will be interpreted as stdin. + + +