~~~~
| **init**
-| [-s *secondary\_ip*]
+| [{-s|--secondary-ip} *secondary\_ip*]
| [--vg-name *vg-name*]
| [--master-netdev *interface-name*]
-| [-m *mac-prefix*]
+| [{-m|--mac-prefix} *mac-prefix*]
| [--no-lvm-storage]
| [--no-etc-hosts]
| [--no-ssh-init]
| [--file-storage-dir *dir*]
| [--enabled-hypervisors *hypervisors*]
| [-t *hypervisor name*]
-| [--hypervisor-parameters *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
-| [--backend-parameters *be-param*=*value* [,*be-param*=*value*...]]
-| [--nic-parameters *nic-param*=*value* [,*nic-param*=*value*...]]
+| [{-H|--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
+| [{-B|--backend-parameters} *be-param*=*value* [,*be-param*=*value*...]]
+| [{-N|--nic-parameters} *nic-param*=*value* [,*nic-param*=*value*...]]
| [--maintain-node-health {yes \| no}]
| [--uid-pool *user-id pool definition*]
-| [-I *default instance allocator*]
+| [{-I|--default-iallocator} *default instance allocator*]
| [--primary-ip-version *version*]
| [--prealloc-wipe-disks {yes \| no}]
| [--node-parameters *ndparams*]
+| [{-C|--candidate-pool-size} *candidate\_pool\_size*]
| {*clustername*}
This command is only run once initially on the first node of the
use.
The cluster can run in two modes: single-home or dual-homed. In the
-first case, all traffic (both public traffic, inter-node traffic
-and data replication traffic) goes over the same interface. In the
+first case, all traffic (both public traffic, inter-node traffic and
+data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
-network. The ``-s`` option here marks the cluster as dual-homed and
-its parameter represents this node's address on the second network.
-If you initialise the cluster with ``-s``, all nodes added must
-have a secondary IP as well.
+network. The ``-s (--secondary-ip)`` option here marks the cluster as
+dual-homed and its parameter represents this node's address on the
+second network. If you initialise the cluster with ``-s``, all nodes
+added must have a secondary IP as well.
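+
+For example, a dual-homed cluster could be initialised as follows (the
+cluster name and secondary address shown are illustrative)::
+
+    # gnt-cluster init -s 192.0.2.10 cluster.example.com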
Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
important that all nodes have this interface because you'll need it
for a master failover.
-The ``-m`` option will let you specify a three byte prefix under
-which the virtual MAC addresses of your instances will be
-generated. The prefix must be specified in the format XX:XX:XX and
-the default is aa:00:00.
+The ``-m (--mac-prefix)`` option will let you specify a three byte
+prefix under which the virtual MAC addresses of your instances will be
+generated. The prefix must be specified in the format ``XX:XX:XX`` and
+the default is ``aa:00:00``.
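+
+For example, to generate instance MACs under a custom prefix (the
+prefix and cluster name shown are illustrative)::
+
+    # gnt-cluster init -m 00:16:3e cluster.example.com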
The ``--no-lvm-storage`` option allows you to initialize the
cluster without lvm support. This means that only instances using
use for storing the instance disk files when using file storage as
backend for instance disks.
-The ``--enabled-hypervisors`` option allows you to set the list of
-hypervisors that will be enabled for this cluster. Instance
-hypervisors can only be chosen from the list of enabled
-hypervisors, and the first entry of this list will be used by
-default. Currently, the following hypervisors are available:
-
The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation. This increases security at
instance level, as the instance can't access untouched data from its
underlying storage.
-
-
-
+The ``--enabled-hypervisors`` option allows you to set the list of
+hypervisors that will be enabled for this cluster. Instance
+hypervisors can only be chosen from the list of enabled
+hypervisors, and the first entry of this list will be used by
+default. Currently, the following hypervisors are available:
xen-pvm
Xen PVM hypervisor
fake
fake hypervisor for development/testing
-
Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.
-The ``--hypervisor-parameters`` option allows you to set default
+The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
-comma-separated list of key=value pairs. The keys available for
-each hypervisors are detailed in the gnt-instance(8) man page, in
-the **add** command plus the following parameters which are only
+comma-separated list of key=value pairs. The keys available for each
+hypervisor are detailed in the **gnt-instance**(8) man page, in the
+**add** command, plus the following parameters which are only
configurable globally (at cluster level):
migration\_port
This option is only effective with kvm versions >= 78 and qemu-kvm
versions >= 0.10.0.
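+
+For example, to enable only the kvm hypervisor and set its global
+migration port at cluster creation (values shown are illustrative)::
+
+    # gnt-cluster init --enabled-hypervisors=kvm \
+        -H kvm:migration_port=8102 cluster.example.com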
-
-The ``--backend-parameters`` option allows you to set the default
+The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
-comma-separated list of key=value pairs with the following
-supported keys:
+comma-separated list of key=value pairs with the following supported
+keys:
vcpus
Number of VCPUs to set for an instance by default, must be an
will be set to true if not specified.
-The ``--nic-parameters`` option allows you to set the default nic
-parameters for the cluster. The parameter format is a
-comma-separated list of key=value pairs with the following
-supported keys:
+The ``-N (--nic-parameters)`` option allows you to set the default nic
+parameters for the cluster. The parameter format is a comma-separated
+list of key=value pairs with the following supported keys:
mode
The default nic mode, 'routed' or 'bridged'.
network script it is interpreted as a routing table number or
name.
-
The option ``--maintain-node-health`` allows you to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
parameters for the cluster. Please see **ganeti**(7) for more
information about supported key=value pairs.
+The ``-C (--candidate-pool-size)`` option specifies the
+``candidate_pool_size`` cluster parameter. This is the number of nodes
+that the master will try to keep as master\_candidates. For more
+details about this role and other node roles, see **ganeti**(7).
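+
+For example, to initialise a cluster that keeps ten master candidates
+(the pool size and cluster name shown are illustrative)::
+
+    # gnt-cluster init -C 10 cluster.example.com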
+
LIST-TAGS
~~~~~~~~~
| [--vg-name *vg-name*]
| [--no-lvm-storage]
| [--enabled-hypervisors *hypervisors*]
-| [--hypervisor-parameters *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
-| [--backend-parameters *be-param*=*value* [,*be-param*=*value*...]]
-| [--nic-parameters *nic-param*=*value* [,*nic-param*=*value*...]]
+| [{-H|--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
+| [{-B|--backend-parameters} *be-param*=*value* [,*be-param*=*value*...]]
+| [{-N|--nic-parameters} *nic-param*=*value* [,*nic-param*=*value*...]]
| [--uid-pool *user-id pool definition*]
| [--add-uids *user-id pool definition*]
| [--remove-uids *user-id pool definition*]
-| [-C *candidate\_pool\_size*]
+| [{-C|--candidate-pool-size} *candidate\_pool\_size*]
| [--maintain-node-health {yes \| no}]
| [--prealloc-wipe-disks {yes \| no}]
-| [-I *default instance allocator*]
+| [{-I|--default-iallocator} *default instance allocator*]
| [--reserved-lvs=*NAMES*]
| [--node-parameters *ndparams*]
| [--master-netdev *interface-name*]
Modify the options for the cluster.
The ``--vg-name``, ``--no-lvm-storage``, ``--enabled-hypervisors``,
-``--hypervisor-parameters``, ``--backend-parameters``,
-``--nic-parameters``, ``--maintain-node-health``,
-``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
-``--master-netdev`` options are described in the **init** command.
-
-The ``-C`` option specifies the ``candidate_pool_size`` cluster
-parameter. This is the number of nodes that the master will try to
-keep as master\_candidates. For more details about this role and
-other node roles, see the ganeti(7). If you increase the size, the
-master will automatically promote as many nodes as required and
-possible to reach the intended number.
+``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``,
+``-N (--nic-parameters)``, ``-C (--candidate-pool-size)``,
+``--maintain-node-health``, ``--prealloc-wipe-disks``, ``--uid-pool``,
+``--node-parameters``, ``--master-netdev`` options are described in
+the **init** command.
The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.
-The ``-I`` is described in the **init** command. To clear the
-default iallocator, just pass an empty string ('').
+The ``-I (--default-iallocator)`` option is described in the **init**
+command. To clear the default iallocator, just pass an empty string
+('').
QUEUE
~~~~~
^^^
| **add**
-| {-t {diskless | file \| plain \| drbd}}
+| {{-t|--disk-template} {diskless \| file \| plain \| drbd}}
| {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,metavg=*VG*][,mode=*ro\|rw*]
-| \| -s *SIZE*}
+| \| {-s|--os-size} *SIZE*}
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
| [--net=*N* [:options...] \| --no-nics]
-| [-B *BEPARAMS*]
-| [-H *HYPERVISOR* [: option=*value*... ]]
-| [-O, --os-parameters *param*=*value*... ]
+| [{-B|--backend-parameters} *BEPARAMS*]
+| [{-H|--hypervisor-parameters} *HYPERVISOR* [: option=*value*... ]]
+| [{-O|--os-parameters} *param*=*value*... ]
| [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
-| {-n *node[:secondary-node]* \| --iallocator *name*}
-| {-o *os-type*}
+| {{-n|--node} *node[:secondary-node]* \| {-I|--iallocator} *name*}
+| {{-o|--os-type} *os-type*}
| [--submit]
| {*instance*}
random MAC, and set up according to the cluster level nic
parameters. Each NIC can take these parameters (all optional):
-
-
mac
either a value or 'generate' to generate a new unique MAC
the instance, you can prevent the default of one NIC with the
``--no-nics`` option.
-The ``-o`` options specifies the operating system to be installed.
-The available operating systems can be listed with **gnt-os list**.
-Passing ``--no-install`` will however skip the OS installation,
-allowing a manual import if so desired. Note that the no-installation
-mode will automatically disable the start-up of the instance (without
-an OS, it most likely won't be able to start-up successfully).
+The ``-o (--os-type)`` option specifies the operating system to be
+installed. The available operating systems can be listed with
+**gnt-os list**. Passing ``--no-install`` will however skip the OS
+installation, allowing a manual import if so desired. Note that the
+no-installation mode will automatically disable the start-up of the
+instance (without an OS, it most likely won't be able to start-up
+successfully).
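+
+For example, to create an instance without installing an OS, for a
+later manual import (all names and sizes shown are illustrative)::
+
+    # gnt-instance add -t plain -s 10G -n node1.example.com \
+        -o debootstrap+default --no-install instance1.example.com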
-The ``-B`` option specifies the backend parameters for the
-instance. If no such parameters are specified, the values are
-inherited from the cluster. Possible parameters are:
+The ``-B (--backend-parameters)`` option specifies the backend
+parameters for the instance. If no such parameters are specified, the
+values are inherited from the cluster. Possible parameters are:
memory
the memory size of the instance; as usual, suffixes can be used to
(enough redundancy in the cluster to survive a node failure)
-The ``-H`` option specified the hypervisor to use for the instance
-(must be one of the enabled hypervisors on the cluster) and optionally
-custom parameters for this instance. If not other options are used
-(i.e. the invocation is just -H *NAME*) the instance will inherit the
-cluster options. The defaults below show the cluster defaults at
-cluster creation time.
+The ``-H (--hypervisor-parameters)`` option specifies the hypervisor
+to use for the instance (must be one of the enabled hypervisors on the
+cluster) and optionally custom parameters for this instance. If no
+other options are used (i.e. the invocation is just ``-H`` *NAME*), the
+instance will inherit the cluster options. The defaults below show the
+cluster defaults at cluster creation time.
The possible hypervisor options are as follows:
This parameter determines the way the network cards are presented
to the instance. The possible options are:
- rtl8139 (default for Xen HVM) (HVM & KVM)
- ne2k\_isa (HVM & KVM)
- ne2k\_pci (HVM & KVM)
- i82551 (KVM)
- i82557b (KVM)
- i82559er (KVM)
- pcnet (KVM)
- e1000 (KVM)
- paravirtual (default for KVM) (HVM & KVM)
+ - rtl8139 (default for Xen HVM) (HVM & KVM)
+ - ne2k\_isa (HVM & KVM)
+ - ne2k\_pci (HVM & KVM)
+ - i82551 (KVM)
+ - i82557b (KVM)
+ - i82559er (KVM)
+ - pcnet (KVM)
+ - e1000 (KVM)
+ - paravirtual (default for KVM) (HVM & KVM)
disk\_type
Valid for the Xen HVM and KVM hypervisors.
"tablet".
-The ``-O`` (``--os-parameters``) option allows customisation of the OS
+The ``-O (--os-parameters)`` option allows customisation of the OS
parameters. The actual parameter names and values depend on the OS
being used, but the syntax is the same key=value. For example, setting
a hypothetical ``dhcp`` parameter to yes can be achieved by::
gnt-instance add -O dhcp=yes ...
+The ``-I (--iallocator)`` option specifies the instance allocator
+plugin to use. If you pass in this option the allocator will select
+nodes for this instance automatically, so you don't need to pass them
+with the ``-n`` option. For more information please refer to the
+instance allocator documentation.
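+
+For example, to let the ``hail`` allocator choose the nodes instead of
+naming them explicitly (disk size and names shown are illustrative)::
+
+    # gnt-instance add -t drbd -s 10G -I hail \
+        -o debootstrap+default instance2.example.com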
-The ``--iallocator`` option specifies the instance allocator plugin to
-use. If you pass in this option the allocator will select nodes for
-this instance automatically, so you don't need to pass them with the
-``-n`` option. For more information please refer to the instance
-allocator documentation.
-
-The ``-t`` options specifies the disk layout type for the instance.
-The available choices are:
+The ``-t (--disk-template)`` option specifies the disk layout type
+for the instance. The available choices are:
diskless
This creates an instance with no disks. It's useful for testing only
Disk devices will be drbd (version 8.x) on top of lvm volumes.
-The optional second value of the ``--node`` is used for the drbd
+The optional second value of the ``-n (--node)`` option is used for the drbd
template type and specifies the remote node.
If you do not want gnt-instance to wait for the disk mirror to be
| **list**
| [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*] [-v]
-| [-o *[+]FIELD,...*] [instance...]
+| [{-o|--output} *[+]FIELD,...*] [instance...]
Shows the currently configured instances with memory usage, disk
usage, the node they are running on, and their run status.
The ``-v`` option activates verbose mode, which changes the display of
special field states (see **ganeti(7)**).
-The ``-o`` option takes a comma-separated list of output fields. The
-available fields and their meaning are:
-
+The ``-o (--output)`` option takes a comma-separated list of output
+fields. The available fields and their meaning are:
name
the instance name
^^^^^^
| **modify**
-| [-H *HYPERVISOR\_PARAMETERS*]
-| [-B *BACKEND\_PARAMETERS*]
+| [{-H|--hypervisor-parameters} *HYPERVISOR\_PARAMETERS*]
+| [{-B|--backend-parameters} *BACKEND\_PARAMETERS*]
| [--net add*[:options]* \| --net remove \| --net *N:options*]
| [--disk add:size=*SIZE*[,vg=*VG*][,metavg=*VG*] \| --disk remove \|
| --disk *N*:mode=*MODE*]
-| [-t plain | -t drbd -n *new_secondary*] [--no-wait-for-sync]
+| [{-t|--disk-template} plain \| {-t|--disk-template} drbd -n *new_secondary*] [--no-wait-for-sync]
| [--os-type=*OS* [--force-variant]]
-| [-O, --os-parameters *param*=*value*... ]
+| [{-O|--os-parameters} *param*=*value*... ]
| [--submit]
| {*instance*}
disks and NICs to/from the instance. Note that you need to give at
least one of the arguments, otherwise the command complains.
-The ``-H``, ``-B`` and ``-O`` options specifies hypervisor, backend
-and OS parameter options in the form of name=value[,...]. For details
+The ``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``
+and ``-O (--os-parameters)`` options specify hypervisor, backend and
+OS parameter options in the form of name=value[,...]. For details on
which options can be specified, see the **add** command.
-The ``-t`` option will change the disk template of the instance.
-Currently only conversions between the plain and drbd disk templates
-are supported, and the instance must be stopped before attempting the
-conversion. When changing from the plain to the drbd disk template, a
-new secondary node must be specified via the ``-n`` option. The option
-``--no-wait-for-sync`` can be used when converting to the ``drbd``
-template in order to make the instance available for startup before
-DRBD has finished resyncing.
+The ``-t (--disk-template)`` option will change the disk template of
+the instance. Currently only conversions between the plain and drbd
+disk templates are supported, and the instance must be stopped before
+attempting the conversion. When changing from the plain to the drbd
+disk template, a new secondary node must be specified via the ``-n``
+option. The option ``--no-wait-for-sync`` can be used when converting
+to the ``drbd`` template in order to make the instance available for
+startup before DRBD has finished resyncing.
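+
+For example, to convert a plain instance to drbd (instance and node
+names shown are illustrative)::
+
+    # gnt-instance stop instance1
+    # gnt-instance modify -t drbd -n node4.example.com instance1
+    # gnt-instance start instance1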
The ``--disk add:size=``*SIZE* option adds a disk to the instance. The
optional ``vg=``*VG* option specifies LVM volume group other than
of the instance, while the ``--net`` *N*:*options* option will change
the parameters of the Nth instance NIC.
-The option ``--os-type`` will change the OS name for the instance
+The option ``-o (--os-type)`` will change the OS name for the instance
(without reinstallation). In case an OS variant is specified that is
not found, then by default the modification is refused, unless
``--force-variant`` is passed. An invalid OS will also be refused,
REINSTALL
^^^^^^^^^
-| **reinstall** [-o *os-type*] [--select-os] [-f *force*]
+| **reinstall** [{-o|--os-type} *os-type*] [--select-os] [-f *force*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all]
-| [-O *OS\_PARAMETERS*] [--submit] {*instance*...}
+| [{-O|--os-parameters} *OS\_PARAMETERS*] [--submit] {*instance*...}
Reinstalls the operating system on the given instance(s). The
-instance(s) must be stopped when running this command. If the
-``--os-type`` is specified, the operating system is changed.
+instance(s) must be stopped when running this command. If the ``-o
+(--os-type)`` option is specified, the operating system is changed.
The ``--select-os`` option switches to an interactive OS reinstall.
The user is prompted to select the OS template from the list of
-available OS templates. OS parameters can be overridden using ``-O``
-(more documentation for this option under the **add** command).
+available OS templates. OS parameters can be overridden using ``-O
+(--os-parameters)`` (more documentation for this option under the
+**add** command).
Since this is a potentially dangerous command, the user will be
required to confirm this action, unless the ``-f`` flag is passed.
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
-| [-H ``key=value...``] [-B ``key=value...``]
+| [{-H|--hypervisor-parameters} ``key=value...``]
+| [{-B|--backend-parameters} ``key=value...``]
| [--submit]
| {*name*...}
The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.
-The ``-H`` and ``-B`` options specify temporary hypervisor and backend
-parameters that can be used to start an instance with modified
-parameters. They can be useful for quick testing without having to
-modify an instance back and forth, e.g.::
+The ``-H (--hypervisor-parameters)`` and ``-B (--backend-parameters)``
+options specify temporary hypervisor and backend parameters that can
+be used to start an instance with modified parameters. They can be
+useful for quick testing without having to modify an instance back and
+forth, e.g.::
# gnt-instance start -H root_args="single" instance1
# gnt-instance start -B memory=2048 instance2
^^^^^^
| **reboot**
-| [--type=*REBOOT-TYPE*]
+| [{-t|--type} *REBOOT-TYPE*]
| [--ignore-secondaries]
| [--shutdown-timeout=*N*]
| [--force-multiple]
| [*name*...]
Reboots one or more instances. The type of reboot depends on the value
-of ``--type``. A soft reboot does a hypervisor reboot, a hard reboot
+of ``-t (--type)``. A soft reboot does a hypervisor reboot, a hard reboot
does an instance stop, recreates the hypervisor config for the instance
and starts the instance. A full reboot does the equivalent of
**gnt-instance shutdown && gnt-instance startup**. The default is
ADD
~~~
-| **add** [--readd] [-s *secondary\_ip*] [-g *nodegroup*]
+| **add** [--readd] [{-s|--secondary-ip} *secondary\_ip*]
+| [{-g|--node-group} *nodegroup*]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``]
| [--node-parameters *ndparams*]
| {*nodename*}
forcibly join the specified host to the cluster, not paying attention
to its current status (it could be already in a cluster, etc.)
-The ``-s`` is used in dual-home clusters and specifies the new node's
-IP in the secondary network. See the discussion in **gnt-cluster**(8)
-for more information.
+The ``-s (--secondary-ip)`` option is used in dual-homed clusters and
+specifies the new node's IP in the secondary network. See the
+discussion in **gnt-cluster**(8) for more information.
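+
+For example, to add a node to a dual-homed cluster (hostname and
+secondary address shown are illustrative)::
+
+    # gnt-node add -s 192.0.2.11 node5.example.com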
In case you're readding a node after hardware failure, you can use
the ``--readd`` parameter. In this case, you don't need to pass the
appears to belong to another cluster. This is used during cluster merging, for
example.
-The ``-g`` is used to add the new node into a specific node group,
-specified by UUID or name. If only one node group exists you can
-skip this option, otherwise it's mandatory.
+The ``-g (--node-group)`` option is used to add the new node into a
+specific node group, specified by UUID or name. If only one node group
+exists you can skip this option, otherwise it's mandatory.
The ``vm_capable``, ``master_capable`` and ``ndparams`` options are
described in **ganeti**(7), and are used to set the properties of the
The new location for the instances can be specified in two ways:
-- as a single node for all instances, via the ``--new-secondary``
+- as a single node for all instances, via the ``-n (--new-secondary)``
option
-- or via the ``--iallocator`` option, giving a script name as
+- or via the ``-I (--iallocator)`` option, giving a script name as
parameter, so each instance will be in turn placed on the (per the
script) optimal node
Example::
- # gnt-node evacuate -I dumb node3.example.com
+ # gnt-node evacuate -I hail node3.example.com
FAILOVER
| **list**
| [--no-headers] [--separator=*SEPARATOR*]
-| [--units=*UNITS*] [-v] [-o *[+]FIELD,...*]
+| [--units=*UNITS*] [-v] [{-o|--output} *[+]FIELD,...*]
| [node...]
Lists the nodes in the cluster.
The ``-v`` option activates verbose mode, which changes the display of
special field states (see **ganeti(7)**).
-The ``-o`` option takes a comma-separated list of output fields.
-The available fields and their meaning are:
+The ``-o (--output)`` option takes a comma-separated list of output
+fields. The available fields and their meaning are:
name
~~~~~~
| **modify** [-f] [--submit]
-| [--master-candidate=``yes|no``] [--drained=``yes|no``] [--offline=``yes|no``]
+| [{-C|--master-candidate} ``yes|no``]
+| [{-D|--drained} ``yes|no``] [{-O|--offline} ``yes|no``]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``] [--auto-promote]
-| [-s *secondary_ip*]
+| [{-s|--secondary-ip} *secondary_ip*]
| [--node-parameters *ndparams*]
| [--node-powered=``yes|no``]
| {*node*}
yes. The meaning of the roles and flags is described in the
manpage **ganeti**(7).
-``--node-powered`` can be used to modify state-of-record if it doesn't reflect
-the reality anymore.
+The option ``--node-powered`` can be used to modify state-of-record if
+it doesn't reflect the reality anymore.
In case a node is demoted from the master candidate role, the
operation will be refused unless you pass the ``--auto-promote``
# gnt-node modify --offline=yes node1.example.com
-The ``-s`` can be used to change the node's secondary ip. No drbd
-instances can be running on the node, while this operation is
-taking place.
+The ``-s (--secondary-ip)`` option can be used to change the node's
+secondary ip. No drbd instances can be running on the node while this
+operation is taking place.
Example (setting the node back to online and master candidate)::
~~~~~~~
| **volumes** [--no-headers] [--human-readable]
-| [--separator=*SEPARATOR*] [--output=*FIELDS*]
+| [--separator=*SEPARATOR*] [{-o|--output} *FIELDS*]
| [*node*...]
Lists all logical volumes and their physical disks from the node(s)
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.
-The ``-o`` option takes a comma-separated list of output fields.
-The available fields and their meaning are:
+The ``-o (--output)`` option takes a comma-separated list of output
+fields. The available fields and their meaning are:
node
the node name on which the volume exists
| **list-storage** [--no-headers] [--human-readable]
| [--separator=*SEPARATOR*] [--storage-type=*STORAGE\_TYPE*]
-| [--output=*FIELDS*]
+| [{-o|--output} *FIELDS*]
| [*node*...]
Lists the available storage units and their details for the given
The ``--storage-type`` option can be used to choose a storage unit
type. Possible choices are lvm-pv, lvm-vg or file.
-The ``-o`` option takes a comma-separated list of output fields.
-The available fields and their meaning are:
+The ``-o (--output)`` option takes a comma-separated list of output
+fields. The available fields and their meaning are:
node
the node name on which the volume exists