+ be listed with <command>gnt-os
+ list</command>. Passing <option>--no-install</option> will
+ however skip the OS installation, allowing a manual import
+ if so desired. Note that the no-installation mode will
+ automatically disable the start-up of the instance (without
+          an OS, it most likely won't be able to start up
+ successfully).
+ </para>
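+
+        <para>
+          For example, to create an instance without installing an OS
+          (all node, OS and instance names here are illustrative only):
+          <screen>
+# gnt-instance add -t plain -n node1.example.com -o debootstrap \
+  --no-install instance1.example.com
+          </screen>
+        </para>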
+
+ <para>
+ The <option>-B</option> option specifies the backend
+ parameters for the instance. If no such parameters are
+ specified, the values are inherited from the cluster. Possible
+ parameters are:
+ <variablelist>
+ <varlistentry>
+ <term>memory</term>
+ <listitem>
+ <simpara>the memory size of the instance; as usual,
+ suffixes can be used to denote the unit, otherwise the
+              value is taken in mebibytes</simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>vcpus</term>
+ <listitem>
+ <simpara>the number of VCPUs to assign to the instance
+ (if this value makes sense for the hypervisor)</simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>auto_balance</term>
+ <listitem>
+ <simpara>whether the instance is considered in the N+1
+ cluster checks (enough redundancy in the cluster to
+ survive a node failure)</simpara>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
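+
+        <para>
+          For example, to create an instance with 512MB of memory and
+          two VCPUs (all names here are illustrative only):
+          <screen>
+# gnt-instance add -t plain -n node1.example.com -o debootstrap \
+  -B memory=512,vcpus=2 instance1.example.com
+          </screen>
+        </para>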
+
+ <para>
+        The <option>-H</option> option specifies the hypervisor to
+ use for the instance (must be one of the enabled hypervisors
+ on the cluster) and optionally custom parameters for this
+        instance. If no other options are used (i.e. the invocation
+ is just <userinput>-H
+ <replaceable>NAME</replaceable></userinput>) the instance
+ will inherit the cluster options. The defaults below show
+ the cluster defaults at cluster creation time.
+ </para>
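+
+      <para>
+        For example, to select the KVM hypervisor with a custom boot
+        order (this assumes KVM is enabled on the cluster; all names
+        here are illustrative only):
+        <screen>
+# gnt-instance add -t plain -n node1.example.com -o debootstrap \
+  -H kvm:boot_order=cdrom instance1.example.com
+        </screen>
+      </para>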
+
+ <para>
+ The possible hypervisor options are as follows:
+ <variablelist>
+ <varlistentry>
+ <term>boot_order</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM
+ hypervisors.</simpara>
+
+ <simpara>A string value denoting the boot order. This
+ has different meaning for the Xen HVM hypervisor and
+ for the KVM one.</simpara>
+
+ <simpara>
+              For Xen HVM, the boot order is a string of letters
+ listing the boot devices, with valid device letters
+ being:
+ </simpara>
+ <variablelist>
+ <varlistentry>
+ <term>a</term>
+ <listitem>
+ <para>
+ floppy drive
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>c</term>
+ <listitem>
+ <para>
+ hard disk
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>d</term>
+ <listitem>
+ <para>
+ CDROM drive
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>n</term>
+ <listitem>
+ <para>
+ network boot (PXE)
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ <simpara>
+              The default is not to set an HVM boot order, which is
+ interpreted as 'dc'.
+ </simpara>
+
+ <simpara>
+ For KVM the boot order is either
+ <quote>cdrom</quote>, <quote>disk</quote> or
+ <quote>network</quote>. Please note that older
+ versions of KVM couldn't netboot from virtio
+ interfaces. This has been fixed in more recent
+ versions and is confirmed to work at least with
+ qemu-kvm 0.11.1.
+ </simpara>
+
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>cdrom_image_path</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <simpara>The path to a CDROM image to attach to the
+ instance.</simpara>
+
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nic_type</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>
+ This parameter determines the way the network cards
+ are presented to the instance. The possible options are:
+ <simplelist>
+                <member>rtl8139 (default for Xen HVM) (HVM &amp; KVM)</member>
+                <member>ne2k_isa (HVM &amp; KVM)</member>
+                <member>ne2k_pci (HVM &amp; KVM)</member>
+                <member>i82551 (KVM)</member>
+                <member>i82557b (KVM)</member>
+                <member>i82559er (KVM)</member>
+                <member>pcnet (KVM)</member>
+                <member>e1000 (KVM)</member>
+                <member>paravirtual (default for KVM) (HVM &amp; KVM)</member>
+ </simplelist>
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>disk_type</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>
+ This parameter determines the way the disks are
+ presented to the instance. The possible options are:
+ <simplelist>
+                <member>ioemu (default for HVM &amp; KVM) (HVM &amp; KVM)</member>
+                <member>ide (HVM &amp; KVM)</member>
+ <member>scsi (KVM)</member>
+ <member>sd (KVM)</member>
+ <member>mtd (KVM)</member>
+ <member>pflash (KVM)</member>
+ </simplelist>
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>vnc_bind_address</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>Specifies the address that the VNC listener for
+ this instance should bind to. Valid values are IPv4
+ addresses. Use the address 0.0.0.0 to bind to all
+ available interfaces (this is the default) or specify
+ the address of one of the interfaces on the node to
+ restrict listening to that interface.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>vnc_tls</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>A boolean option that controls whether the
+ VNC connection is secured with TLS.</simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>vnc_x509_path</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+            <para>If <option>vnc_tls</option> is enabled, this
+            option specifies the path to the x509 certificate to
+ use.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>vnc_x509_verify</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>acpi</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>
+ A boolean option that specifies if the hypervisor
+ should enable ACPI support for this instance. By
+ default, ACPI is disabled.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>pae</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>
+ A boolean option that specifies if the hypervisor
+              should enable PAE support for this instance. The
+ default is false, disabling PAE support.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>use_localtime</term>
+ <listitem>
+ <simpara>Valid for the Xen HVM and KVM hypervisors.</simpara>
+
+ <para>
+ A boolean option that specifies if the instance
+ should be started with its clock set to the
+              localtime of the machine (when true) or to UTC
+              (when false). The default is false, which is useful
+ for Linux/Unix machines; for Windows OSes, it is
+ recommended to enable this parameter.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>kernel_path</term>
+ <listitem>
+ <simpara>Valid for the Xen PVM and KVM hypervisors.</simpara>
+
+ <para>
+ This option specifies the path (on the node) to the
+ kernel to boot the instance with. Xen PVM instances
+ always require this, while for KVM if this option is
+ empty, it will cause the machine to load the kernel
+ from its disks.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>kernel_args</term>
+ <listitem>
+ <simpara>Valid for the Xen PVM and KVM hypervisors.</simpara>
+
+ <para>
+              This option specifies extra arguments to the kernel
+              that will be loaded. This is always used
+ for Xen PVM, while for KVM it is only used if the
+ <option>kernel_path</option> option is also
+ specified.
+ </para>
+
+ <para>
+ The default setting for this value is simply
+ <constant>"ro"</constant>, which mounts the root
+              disk (initially) in read-only mode. For example,
+ setting this to <userinput>single</userinput> will
+ cause the instance to start in single-user mode.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>initrd_path</term>
+ <listitem>
+ <simpara>Valid for the Xen PVM and KVM hypervisors.</simpara>
+
+ <para>
+ This option specifies the path (on the node) to the
+ initrd to boot the instance with. Xen PVM instances
+              can always use this, while for KVM this option is
+              only used if the <option>kernel_path</option> option
+ is also specified. You can pass here either an
+ absolute filename (the path to the initrd) if you
+ want to use an initrd, or use the format
+ <userinput>no_initrd_path</userinput> for no initrd.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>root_path</term>
+ <listitem>
+ <simpara>Valid for the Xen PVM and KVM hypervisors.</simpara>
+
+ <para>
+              This option specifies the name of the root
+ device. This is always needed for Xen PVM, while for
+ KVM it is only used if the
+ <option>kernel_path</option> option is also
+ specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>serial_console</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>This boolean option specifies whether to
+ emulate a serial console for the instance.</simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>disk_cache</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>The disk cache mode. It can be either
+ <userinput>default</userinput> to not pass any cache
+ option to KVM, or one of the KVM cache modes: none
+ (for direct I/O), writethrough (to use the host cache
+ but report completion to the guest only when the host
+ has committed the changes to disk) or writeback (to
+ use the host cache and report completion as soon as
+ the data is in the host cache). Note that there are
+ special considerations for the cache mode depending on
+ version of KVM used and disk type (always raw file
+ under Ganeti), please refer to the KVM documentation
+ for more details.
+ </simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>security_model</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+            <simpara>The security model for KVM. Currently one of
+ <quote>none</quote>, <quote>user</quote> or
+ <quote>pool</quote>. Under <quote>none</quote>, the
+ default, nothing is done and instances are run as
+ the ganeti daemon user (normally root).
+ </simpara>
+
+ <simpara>Under <quote>user</quote> kvm will drop
+ privileges and become the user specified by the
+ security_domain parameter.
+ </simpara>
+
+ <simpara>Under <quote>pool</quote> a global cluster
+ pool of users will be used, making sure no two
+            instances share the same user on the same node
+            (this mode is not implemented yet).
+ </simpara>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>security_domain</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>Under security model <quote>user</quote> the username to
+ run the instance under. It must be a valid username
+ existing on the host.
+ </simpara>
+ <simpara>Cannot be set under security model <quote>none</quote>
+ or <quote>pool</quote>.
+ </simpara>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>kvm_flag</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>If <quote>enabled</quote> the -enable-kvm flag is
+ passed to kvm. If <quote>disabled</quote> -disable-kvm is
+ passed. If unset no flag is passed, and the default running
+ mode for your kvm binary will be used.
+ </simpara>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>migration_downtime</term>
+ <listitem>
+ <simpara>Valid for the KVM hypervisor.</simpara>
+
+ <simpara>The maximum amount of time (in ms) a KVM instance is
+ allowed to be frozen during a live migration, in order to copy
+ dirty memory pages. Default value is 30ms, but you may need to
+ increase this value for busy instances.
+ </simpara>
+
+ <simpara>This option is only effective with kvm versions >= 87
+ and qemu-kvm versions >= 0.11.0.
+ </simpara>
+
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ </para>
+
+ <para>
+ The <option>--iallocator</option> option specifies the instance
+ allocator plugin to use. If you pass in this option the allocator
+ will select nodes for this instance automatically, so you don't need
+ to pass them with the <option>-n</option> option. For more
+ information please refer to the instance allocator documentation.
+ </para>
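+
+      <para>
+        For example, to let an allocator plugin choose the nodes
+        automatically (<userinput>hail</userinput> is just an example
+        plugin name here; use whichever allocator is installed on your
+        cluster):
+        <screen>
+# gnt-instance add -t drbd --iallocator hail -o debootstrap \
+  instance1.example.com
+        </screen>
+      </para>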
+
+ <para>
+      The <option>-t</option> option specifies the disk layout type for
+ the instance. The available choices are:
+ <variablelist>
+ <varlistentry>
+ <term>diskless</term>
+ <listitem>
+ <para>
+            This creates an instance with no disks. It's useful for
+ testing only (or other special cases).
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>file</term>
+ <listitem>
+ <para>Disk devices will be regular files.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>plain</term>
+ <listitem>
+ <para>Disk devices will be logical volumes.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>drbd</term>
+ <listitem>
+ <para>
+ Disk devices will be drbd (version 8.x) on top of
+ lvm volumes.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+      The optional second value of the <option>--node</option> option
+      is used for the drbd template type and specifies the remote node.
+ </para>
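+
+    <para>
+      For example, to create a drbd-based instance with explicit
+      primary and secondary nodes (all names here are illustrative
+      only):
+      <screen>
+# gnt-instance add -t drbd -n node1.example.com:node2.example.com \
+  -o debootstrap instance1.example.com
+      </screen>
+    </para>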
+
+ <para>
+      If you do not want <command>gnt-instance</command> to wait for the disk mirror
+ to be synced, use the <option>--no-wait-for-sync</option>
+ option.
+ </para>
+
+ <para>
+      The <option>--file-storage-dir</option> option specifies the relative path
+ under the cluster-wide file storage directory to store file-based
+ disks. It is useful for having different subdirectories for
+ different instances. The full path of the directory where the disk
+ files are stored will consist of cluster-wide file storage directory
+ + optional subdirectory + instance name. Example:
+ /srv/ganeti/file-storage/mysubdir/instance1.example.com. This option
+ is only relevant for instances using the file storage backend.
+ </para>
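+
+    <para>
+      For example, to place a file-based instance's disks under a
+      custom subdirectory (all names here are illustrative only):
+      <screen>
+# gnt-instance add -t file --file-storage-dir mysubdir \
+  -n node1.example.com -o debootstrap instance1.example.com
+      </screen>
+    </para>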
+
+ <para>
+      The <option>--file-driver</option> option specifies the driver to use
+      for file-based disks. Note that currently these drivers work with
+      the Xen hypervisor only. This option is only relevant for instances using
+ the file storage backend. The available choices are:
+ <variablelist>
+ <varlistentry>
+ <term>loop</term>
+ <listitem>
+ <para>
+ Kernel loopback driver. This driver uses loopback
+ devices to access the filesystem within the
+ file. However, running I/O intensive applications in
+ your instance using the loop driver might result in
+ slowdowns. Furthermore, if you use the loopback
+ driver consider increasing the maximum amount of
+ loopback devices (on most systems it's 8) using the
+ max_loop param.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>blktap</term>
+ <listitem>
+ <para>The blktap driver (for Xen hypervisors). In
+ order to be able to use the blktap driver you should
+ check if the 'blktapctrl' user space disk agent is
+ running (usually automatically started via xend). This
+ user-level disk I/O interface has the advantage of
+ better performance. Especially if you use a network
+ file system (e.g. NFS) to store your instances this is
+ the recommended choice.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>