constants.ISPECS_MAX,
constants.ISPECS_STD,
constants.IPOLICY_DTS,
- constants.IPOLICY_VCPU_RATIO])
+ constants.IPOLICY_VCPU_RATIO,
+ constants.IPOLICY_SPINDLE_RATIO])
.. pyassert::
constants.ISPEC_DISK_SIZE,
constants.ISPEC_DISK_COUNT,
constants.ISPEC_CPU_COUNT,
- constants.ISPEC_NIC_COUNT]))
+ constants.ISPEC_NIC_COUNT,
+ constants.ISPEC_SPINDLE_USE]))
.. |ispec-min| replace:: :pyeval:`constants.ISPECS_MIN`
.. |ispec-max| replace:: :pyeval:`constants.ISPECS_MAX`
The number of CPUs used
:pyeval:`constants.ISPEC_NIC_COUNT`
The number of NICs used
+ :pyeval:`constants.ISPEC_SPINDLE_USE`
+ The number of virtual disk spindles used by this instance. They are
+ not real in the sense of actual HDD spindles, but they are useful
+ for accounting for spindle usage on the node the instance resides on
:pyeval:`constants.IPOLICY_DTS`
A `list` of disk templates allowed for instances using this policy
:pyeval:`constants.IPOLICY_VCPU_RATIO`
Maximum ratio of virtual to physical CPUs (`float`)
+:pyeval:`constants.IPOLICY_SPINDLE_RATIO`
+ Maximum ratio of instances to their node's ``spindle_count`` (`float`)
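
The effect of the two ratio limits can be sketched with a small, purely
illustrative calculation; the function and argument names below are
hypothetical and not part of the Ganeti API:

```python
# Illustrative sketch of how the two policy ratios constrain a node;
# all names here are hypothetical, not part of the Ganeti API.
def ratio_violations(node_pcpus, total_vcpus, node_spindles,
                     used_spindles, vcpu_ratio, spindle_ratio):
    """Return human-readable violations of the two ratio policies."""
    violations = []
    # IPOLICY_VCPU_RATIO caps virtual CPUs per physical CPU on the node.
    if total_vcpus / node_pcpus > vcpu_ratio:
        violations.append("vcpu-ratio exceeded")
    # IPOLICY_SPINDLE_RATIO caps spindle usage against spindle_count.
    if used_spindles / node_spindles > spindle_ratio:
        violations.append("spindle-ratio exceeded")
    return violations

# 8 physical CPUs running 40 virtual CPUs breaks a 4.0 vcpu ratio,
# while 6 used spindles on a 4-spindle node stays within a 2.0 ratio.
print(ratio_violations(8, 40, 4, 6, 4.0, 2.0))  # ['vcpu-ratio exceeded']
```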
Usage examples
--------------
Shell
+++++
-.. highlight:: sh
+.. highlight:: shell-example
Using wget::
- wget -q -O - https://CLUSTERNAME:5080/2/info
+ $ wget -q -O - https://%CLUSTERNAME%:5080/2/info
or curl::
- curl https://CLUSTERNAME:5080/2/info
+ $ curl https://%CLUSTERNAME%:5080/2/info
Python
Redistribute configuration to all nodes. The result will be a job ID.
+Job result:
+
+.. opcode_result:: OP_CLUSTER_REDIST_CONF
+
``/2/features``
+++++++++++++++
rlib2._NODE_EVAC_RES1])
:pyeval:`rlib2._INST_CREATE_REQV1`
- Instance creation request data version 1 supported.
+ Instance creation request data version 1 supported
:pyeval:`rlib2._INST_REINSTALL_REQV1`
- Instance reinstall supports body parameters.
+ Instance reinstall supports body parameters
:pyeval:`rlib2._NODE_MIGRATE_REQV1`
Whether migrating a node (``/2/nodes/[node_name]/migrate``) supports
- request body parameters.
+ request body parameters
:pyeval:`rlib2._NODE_EVAC_RES1`
Whether evacuating a node (``/2/nodes/[node_name]/evacuate``) returns
a new-style result (see resource description)
.. opcode_params:: OP_CLUSTER_SET_PARAMS
+Job result:
+
+.. opcode_result:: OP_CLUSTER_SET_PARAMS
+
``/2/groups``
+++++++++++++
(i.e. ``?bulk=1``), the output contains detailed information about node
groups as a list.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`.
Example::
Earlier versions used a parameter named ``name`` which, while still
supported, has been renamed to ``group_name``.
+Job result:
+
+.. opcode_result:: OP_GROUP_ADD
+
``/2/groups/[group_name]``
++++++++++++++++++++++++++
Returns information about a node group, similar to the bulk output from
the node group list.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`.
``DELETE``
~~~~~~~~~~
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_GROUP_REMOVE
+
``/2/groups/[group_name]/modify``
+++++++++++++++++++++++++++++++++
.. opcode_params:: OP_GROUP_ASSIGN_NODES
:exclude: group_name, force, dry_run
+Job result:
+
+.. opcode_result:: OP_GROUP_ASSIGN_NODES
+
``/2/groups/[group_name]/tags``
+++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+``/2/instances-multi-alloc``
+++++++++++++++++++++++++++++
+
+Tries to allocate multiple instances.
+
+It supports the following commands: ``POST``.
+
+``POST``
+~~~~~~~~
+
+The parameters:
+
+.. opcode_params:: OP_INSTANCE_MULTI_ALLOC
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_MULTI_ALLOC
+
+
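
As a sketch of what a client might send here, the snippet below
assembles a JSON body carrying a list of per-instance creation dicts.
The key names (``instances``, ``instance_name``, ``disk_template``) are
illustrative assumptions; the opcode parameter list above is
authoritative:

```python
import json

# Hedged sketch of a multi-allocation request body. The key names
# below are assumptions for illustration only; consult the opcode
# parameter list in the documentation for the real set.
body = {
    "instances": [
        {"instance_name": "inst1.example.com", "disk_template": "drbd"},
        {"instance_name": "inst2.example.com", "disk_template": "plain"},
    ],
}
payload = json.dumps(body, indent=2)
print(payload)
```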
``/2/instances``
++++++++++++++++
(i.e. ``?bulk=1``), the output contains detailed information about
instances as a list.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`.
Example::
Returns information about an instance, similar to the bulk output from
the instance list.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`.
``DELETE``
~~~~~~~~~~
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REMOVE
+
``/2/instances/[instance_name]/info``
+++++++++++++++++++++++++++++++++++++++
configuration without querying the instance's nodes. The result will be
a job ID.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_QUERY_DATA
+
``/2/instances/[instance_name]/reboot``
+++++++++++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REBOOT
+
``/2/instances/[instance_name]/shutdown``
+++++++++++++++++++++++++++++++++++++++++
.. opcode_params:: OP_INSTANCE_SHUTDOWN
:exclude: instance_name, dry_run
+Job result:
+
+.. opcode_result:: OP_INSTANCE_SHUTDOWN
+
``/2/instances/[instance_name]/startup``
++++++++++++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_STARTUP
+
+
``/2/instances/[instance_name]/reinstall``
++++++++++++++++++++++++++++++++++++++++++++++
Ganeti 2.4 and below used query parameters. Those are deprecated and
should no longer be used.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REPLACE_DISKS
+
``/2/instances/[instance_name]/activate-disks``
+++++++++++++++++++++++++++++++++++++++++++++++
Takes the bool parameter ``ignore_size``. When set, the recorded size
is ignored (useful for forcing activation when the recorded size is
wrong).
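
A sketch of how a client could pass the boolean query argument; the
host and instance names are placeholders, and encoding booleans as
``0``/``1`` query values is an assumption here:

```python
from urllib.parse import urlencode

# Hedged sketch: building the activate-disks URL with ignore_size set.
# The cluster and instance names are placeholders.
cluster = "cluster.example.com"
instance = "inst1.example.com"
query = urlencode({"ignore_size": 1})  # assumed 0/1 encoding for bools
url = ("https://%s:5080/2/instances/%s/activate-disks?%s"
       % (cluster, instance, query))
print(url)
```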
+Job result:
+
+.. opcode_result:: OP_INSTANCE_ACTIVATE_DISKS
+
``/2/instances/[instance_name]/deactivate-disks``
+++++++++++++++++++++++++++++++++++++++++++++++++
Takes no parameters.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_DEACTIVATE_DISKS
+
``/2/instances/[instance_name]/recreate-disks``
+++++++++++++++++++++++++++++++++++++++++++++++++
.. opcode_params:: OP_INSTANCE_RECREATE_DISKS
:exclude: instance_name
+Job result:
+
+.. opcode_result:: OP_INSTANCE_RECREATE_DISKS
+
``/2/instances/[instance_name]/disk/[disk_index]/grow``
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. opcode_params:: OP_INSTANCE_GROW_DISK
:exclude: instance_name, disk
+Job result:
+
+.. opcode_result:: OP_INSTANCE_GROW_DISK
+
``/2/instances/[instance_name]/prepare-export``
+++++++++++++++++++++++++++++++++++++++++++++++++
Takes one parameter, ``mode``, for the export mode. Returns a job ID.
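
A sketch of the request a client might build for this resource; the
host and instance names are placeholders, and ``local`` is only an
assumed example value for ``mode``, not a verified list of modes:

```python
from urllib.parse import urlencode

# Hedged sketch: selecting the export mode via the ``mode`` parameter.
# "local" is an assumed example value; host and instance are
# placeholders.
query = urlencode({"mode": "local"})
url = "https://%s:5080/2/instances/%s/prepare-export?%s" % (
    "cluster.example.com", "inst1.example.com", query)
print(url)
```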
+Job result:
+
+.. opcode_result:: OP_BACKUP_PREPARE
+
``/2/instances/[instance_name]/export``
+++++++++++++++++++++++++++++++++++++++++++++++++
:exclude: instance_name
:alias: target_node=destination
+Job result:
+
+.. opcode_result:: OP_BACKUP_EXPORT
+
``/2/instances/[instance_name]/migrate``
++++++++++++++++++++++++++++++++++++++++
.. opcode_params:: OP_INSTANCE_MIGRATE
:exclude: instance_name, live
+Job result:
+
+.. opcode_result:: OP_INSTANCE_MIGRATE
+
``/2/instances/[instance_name]/failover``
+++++++++++++++++++++++++++++++++++++++++
.. opcode_params:: OP_INSTANCE_FAILOVER
:exclude: instance_name
+Job result:
+
+.. opcode_result:: OP_INSTANCE_FAILOVER
+
``/2/instances/[instance_name]/rename``
++++++++++++++++++++++++++++++++++++++++
])
``instance``
- Instance name.
+ Instance name
``kind``
Console type, one of :pyeval:`constants.CONS_SSH`,
:pyeval:`constants.CONS_VNC`, :pyeval:`constants.CONS_SPICE`
- or :pyeval:`constants.CONS_MESSAGE`.
+ or :pyeval:`constants.CONS_MESSAGE`
``message``
- Message to display (:pyeval:`constants.CONS_MESSAGE` type only).
+ Message to display (:pyeval:`constants.CONS_MESSAGE` type only)
``host``
Host to connect to (:pyeval:`constants.CONS_SSH`,
- :pyeval:`constants.CONS_VNC` or :pyeval:`constants.CONS_SPICE` only).
+ :pyeval:`constants.CONS_VNC` or :pyeval:`constants.CONS_SPICE` only)
``port``
TCP port to connect to (:pyeval:`constants.CONS_VNC` or
- :pyeval:`constants.CONS_SPICE` only).
+ :pyeval:`constants.CONS_SPICE` only)
``user``
- Username to use (:pyeval:`constants.CONS_SSH` only).
+ Username to use (:pyeval:`constants.CONS_SSH` only)
``command``
Command to execute on machine (:pyeval:`constants.CONS_SSH` only)
``display``
- VNC display number (:pyeval:`constants.CONS_VNC` only).
+ VNC display number (:pyeval:`constants.CONS_VNC` only)
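
The fields above can be turned into a connect command on the client
side. A purely illustrative dispatcher follows; the literal ``kind``
strings and the sample dict are assumptions for this example:

```python
# Illustrative sketch: dispatching on the console description returned
# by the API. The literal kind strings ("ssh", "vnc", "spice", "msg")
# and the sample dict contents are assumptions for this example.
def connect_command(console):
    kind = console["kind"]
    if kind == "msg":
        return console["message"]
    if kind == "ssh":
        return "ssh %(user)s@%(host)s %(command)s" % console
    if kind == "vnc":
        return "vncviewer %(host)s:%(display)s" % console
    if kind == "spice":
        return "spice://%(host)s?port=%(port)s" % console
    raise ValueError("unknown console kind: %r" % kind)

print(connect_command({"kind": "ssh", "user": "root",
                       "host": "node1.example.com",
                       "command": "xm console inst1"}))
```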
``/2/instances/[instance_name]/tags``
Returned fields for bulk requests (unlike other bulk requests, these
fields are not the same as for per-job requests):
-:pyeval:`utils.CommaJoin(sorted(rlib2.J_FIELDS_BULK))`
+:pyeval:`utils.CommaJoin(sorted(rlib2.J_FIELDS_BULK))`.
``/2/jobs/[job_id]``
++++++++++++++++++++
dict:
``fields``
- The job fields on which to watch for changes.
+ The job fields on which to watch for changes
``previous_job_info``
- Previously received field values or None if not yet available.
+ Previously received field values or None if not yet available
``previous_log_serial``
Highest log serial number received so far or None if not yet
- available.
+ available
Returns None if no changes have been detected; otherwise, a dict with
two keys, ``job_info`` and ``log_entries``.
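
The request body for such a long-poll can be sketched as follows; only
the three fields are taken from the text above, while the wire details
around them are assumptions:

```python
import json

# Hedged sketch of the long-polling request body described above: the
# client reports what it has already seen so the server can answer
# only when something has changed.
def wait_request(fields, previous_job_info=None, previous_log_serial=None):
    return json.dumps({
        "fields": fields,                        # job fields to watch
        "previous_job_info": previous_job_info,  # None on the first call
        "previous_log_serial": previous_log_serial,
    })

first = wait_request(["status"])
print(first)
```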
(i.e. ``?bulk=1``), the output contains detailed information about nodes
as a list.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`.
Example::
It supports the following commands: ``GET``.
-Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`.
``/2/nodes/[node_name]/powercycle``
+++++++++++++++++++++++++++++++++++
Returns a job ID.
+Job result:
+
+.. opcode_result:: OP_NODE_POWERCYCLE
+
``/2/nodes/[node_name]/evacuate``
+++++++++++++++++++++++++++++++++
It supports the bool ``force`` argument.
+Job result:
+
+.. opcode_result:: OP_NODE_SET_PARAMS
+
``/2/nodes/[node_name]/modify``
+++++++++++++++++++++++++++++++
additionally. Currently only :pyeval:`constants.SF_ALLOCATABLE` (bool)
is supported. The result will be a job ID.
+Job result:
+
+.. opcode_result:: OP_NODE_MODIFY_STORAGE
+
+
``/2/nodes/[node_name]/storage/repair``
+++++++++++++++++++++++++++++++++++++++
repaired) and ``name`` (name of the storage unit). The result will be a
job ID.
+Job result:
+
+.. opcode_result:: OP_REPAIR_NODE_STORAGE
+
+
``/2/nodes/[node_name]/tags``
+++++++++++++++++++++++++++++