``/var/lib/ganeti/rapi/users``) on startup. Changes to the file will be
read automatically.
-Each line consists of two or three fields separated by whitespace. The
-first two fields are for username and password. The third field is
-optional and can be used to specify per-user options. Currently,
-``write`` is the only option supported and enables the user to execute
-operations modifying the cluster. Lines starting with the hash sign
-(``#``) are treated as comments.
+Lines starting with the hash sign (``#``) are treated as comments. Each
+line consists of two or three fields separated by whitespace. The first
+two fields are for username and password. The third field is optional
+and can be used to specify per-user options (comma-separated, without
+spaces). Available options:
+
+.. pyassert::
+
+ rapi.RAPI_ACCESS_ALL == set([
+ rapi.RAPI_ACCESS_WRITE,
+ rapi.RAPI_ACCESS_READ,
+ ])
+
+:pyeval:`rapi.RAPI_ACCESS_WRITE`
+ Enables the user to execute operations modifying the cluster. Implies
+ :pyeval:`rapi.RAPI_ACCESS_READ` access.
+:pyeval:`rapi.RAPI_ACCESS_READ`
+ Allows access to operations querying for information.
Passwords can either be written in clear text or as a hash. Clear text
passwords may not start with an opening brace (``{``) or they must be
# Hashed password for Jessica
jessica {HA1}7046452df2cbb530877058712cf17bd4 write
+ # Monitoring can query for values
+ monitoring {HA1}ec018ffe72b8e75bb4d508ed5b6d079c read
+
+ # A user who can query and write
+ superuser {HA1}ec018ffe72b8e75bb4d508ed5b6d079c read,write
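
For reference, the ``{HA1}`` values above can be generated with any MD5
tool. A minimal Python sketch, assuming the realm is ``Ganeti Remote
API`` and using a made-up username and password:

```python
# Sketch: computing an {HA1} hash for the users file. The realm
# "Ganeti Remote API" is assumed here; username and password are
# made-up examples.
from hashlib import md5

def ha1_hash(username, realm, password):
    # RFC 2617, section 3.2.2.2: H(A1) = MD5(username:realm:password)
    data = "%s:%s:%s" % (username, realm, password)
    return md5(data.encode("utf-8")).hexdigest()

line = "jessica {HA1}%s write" % ha1_hash("jessica", "Ganeti Remote API", "secret")
```
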
+
.. [#pwhash] Using the MD5 hash of username, realm and password is
- described in :rfc:`2617` ("HTTP Authentication"), sections 3.2.2.2 and
- 3.3. The reason for using it over another algorithm is forward
+ described in :rfc:`2617` ("HTTP Authentication"), sections 3.2.2.2
+ and 3.3. The reason for using it over another algorithm is forward
compatibility. If ``ganeti-rapi`` were to implement HTTP Digest
authentication in the future, the same hash could be used.
In the current version ``ganeti-rapi``'s realm, ``Ganeti Remote
.. _JSON: http://www.json.org/
.. _REST: http://en.wikipedia.org/wiki/Representational_State_Transfer
+HTTP requests with a body (e.g. ``PUT`` or ``POST``) require the request
+header ``Content-type`` to be set to ``application/json`` (see
+:rfc:`2616` (HTTP/1.1), section 7.2.1).
+
A note on JSON as used by RAPI
++++++++++++++++++++++++++++++
Force operation to continue even if it will cause the cluster to become
inconsistent (e.g. because there are not enough master candidates).
+Parameter details
+-----------------
+
+Some parameters are not straightforward, so we describe them in detail
+here.
+
+.. _rapi-ipolicy:
+
+``ipolicy``
++++++++++++
+
+The instance policy specification is a dict with the following fields:
+
+.. pyassert::
+
+ constants.IPOLICY_ALL_KEYS == set([constants.ISPECS_MIN,
+ constants.ISPECS_MAX,
+ constants.ISPECS_STD,
+ constants.IPOLICY_DTS,
+ constants.IPOLICY_VCPU_RATIO,
+ constants.IPOLICY_SPINDLE_RATIO])
+
+
+.. pyassert::
+
+ (set(constants.ISPECS_PARAMETER_TYPES.keys()) ==
+ set([constants.ISPEC_MEM_SIZE,
+ constants.ISPEC_DISK_SIZE,
+ constants.ISPEC_DISK_COUNT,
+ constants.ISPEC_CPU_COUNT,
+ constants.ISPEC_NIC_COUNT,
+ constants.ISPEC_SPINDLE_USE]))
+
+.. |ispec-min| replace:: :pyeval:`constants.ISPECS_MIN`
+.. |ispec-max| replace:: :pyeval:`constants.ISPECS_MAX`
+.. |ispec-std| replace:: :pyeval:`constants.ISPECS_STD`
+
+
+|ispec-min|, |ispec-max|, |ispec-std|
+ A sub-`dict` with the following fields, which set the limit and standard
+ values of the instances:
+
+ :pyeval:`constants.ISPEC_MEM_SIZE`
+ The size in MiB of the memory used
+ :pyeval:`constants.ISPEC_DISK_SIZE`
+ The size in MiB of the disk used
+ :pyeval:`constants.ISPEC_DISK_COUNT`
+ The number of disks used
+ :pyeval:`constants.ISPEC_CPU_COUNT`
+ The number of CPUs used
+ :pyeval:`constants.ISPEC_NIC_COUNT`
+ The number of NICs used
+ :pyeval:`constants.ISPEC_SPINDLE_USE`
+ The number of virtual disk spindles used by this instance. They are
+ not real in the sense of actual HDD spindles, but useful for
+ accounting for spindle usage on the node where the instance resides
+:pyeval:`constants.IPOLICY_DTS`
+ A `list` of disk templates allowed for instances using this policy
+:pyeval:`constants.IPOLICY_VCPU_RATIO`
+ Maximum ratio of virtual to physical CPUs (`float`)
+:pyeval:`constants.IPOLICY_SPINDLE_RATIO`
+ Maximum ratio of instances to their node's ``spindle_count`` (`float`)
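
Putting these together, an ``ipolicy`` value could look like the
following sketch; the key names are the string values of the constants
documented above (assumed here), and all numbers are purely
illustrative:

```python
# Illustrative ipolicy dict: min/max/std spec sub-dicts plus the three
# top-level limits. Key names are assumed from the constants above;
# the concrete values are made up.
ipolicy = {
    "min": {"memory-size": 128, "disk-size": 1024, "disk-count": 1,
            "cpu-count": 1, "nic-count": 1, "spindle-use": 1},
    "max": {"memory-size": 32768, "disk-size": 1048576, "disk-count": 8,
            "cpu-count": 8, "nic-count": 8, "spindle-use": 12},
    "std": {"memory-size": 512, "disk-size": 10240, "disk-count": 1,
            "cpu-count": 1, "nic-count": 1, "spindle-use": 1},
    "disk-templates": ["plain", "drbd"],
    "vcpu-ratio": 4.0,
    "spindle-ratio": 32.0,
}
```
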
+
Usage examples
--------------
Shell
+++++
-.. highlight:: sh
+.. highlight:: shell-example
Using wget::
- wget -q -O - https://CLUSTERNAME:5080/2/info
+ $ wget -q -O - https://%CLUSTERNAME%:5080/2/info
or curl::
- curl https://CLUSTERNAME:5080/2/info
+ $ curl https://%CLUSTERNAME%:5080/2/info
Python
``/``
+++++
-The root resource.
-
-It supports the following commands: ``GET``.
-
-``GET``
-~~~~~~~
-
-Shows the list of mapped resources.
-
-Returns: a dictionary with 'name' and 'uri' keys for each of them.
+The root resource. Has no function, but for legacy reasons the ``GET``
+method is supported.
``/2``
++++++
-The ``/2`` resource, the root of the version 2 API.
-
-It supports the following commands: ``GET``.
-
-``GET``
-~~~~~~~
-
-Show the list of mapped resources.
-
-Returns: a dictionary with ``name`` and ``uri`` keys for each of them.
+Has no function, but for legacy reasons the ``GET`` method is supported.
``/2/info``
+++++++++++
Redistribute configuration to all nodes. The result will be a job id.
+Job result:
+
+.. opcode_result:: OP_CLUSTER_REDIST_CONF
+
``/2/features``
+++++++++++++++
Returns a list of features supported by the RAPI server. Available
features:
-``instance-create-reqv1``
- Instance creation request data version 1 supported.
-``instance-reinstall-reqv1``
- Instance reinstall supports body parameters.
+.. pyassert::
+
+ rlib2.ALL_FEATURES == set([rlib2._INST_CREATE_REQV1,
+ rlib2._INST_REINSTALL_REQV1,
+ rlib2._NODE_MIGRATE_REQV1,
+ rlib2._NODE_EVAC_RES1])
+
+:pyeval:`rlib2._INST_CREATE_REQV1`
+ Instance creation request data version 1 supported
+:pyeval:`rlib2._INST_REINSTALL_REQV1`
+ Instance reinstall supports body parameters
+:pyeval:`rlib2._NODE_MIGRATE_REQV1`
+ Whether migrating a node (``/2/nodes/[node_name]/migrate``) supports
+ request body parameters
+:pyeval:`rlib2._NODE_EVAC_RES1`
+ Whether evacuating a node (``/2/nodes/[node_name]/evacuate``) returns
+ a new-style result (see resource description)
+
+
+``/2/modify``
+++++++++++++++++++++++++++++++++++++++++
+
+Modifies cluster parameters.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_CLUSTER_SET_PARAMS
+
+Job result:
+
+.. opcode_result:: OP_CLUSTER_SET_PARAMS
``/2/groups``
(i.e. ``?bulk=1``), the output contains detailed information about node
groups as a list.
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`.
+
Example::
[
Body parameters:
-``name`` (string, required)
- Node group name.
+.. opcode_params:: OP_GROUP_ADD
+
+Earlier versions used a parameter named ``name`` which, while still
+supported, has been renamed to ``group_name``.
+
+Job result:
+
+.. opcode_result:: OP_GROUP_ADD
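
A minimal request body, as a sketch (the group name is made up):

```python
# Sketch: minimal POST body for /2/groups, using the current
# "group_name" parameter; "group1" is a made-up name.
import json

body = json.dumps({"group_name": "group1"})
```
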
``/2/groups/[group_name]``
Returns information about a node group, similar to the bulk output from
the node group list.
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.G_FIELDS))`.
+
``DELETE``
~~~~~~~~~~
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_GROUP_REMOVE
+
``/2/groups/[group_name]/modify``
+++++++++++++++++++++++++++++++++
Body parameters:
-``alloc_policy`` (string)
- If present, the new allocation policy for the node group.
+.. opcode_params:: OP_GROUP_SET_PARAMS
+ :exclude: group_name
+
+Job result:
+
+.. opcode_result:: OP_GROUP_SET_PARAMS
``/2/groups/[group_name]/rename``
Body parameters:
-``new_name`` (string, required)
- New node group name.
+.. opcode_params:: OP_GROUP_RENAME
+ :exclude: group_name
+
+Job result:
+
+.. opcode_result:: OP_GROUP_RENAME
+
+
+``/2/groups/[group_name]/assign-nodes``
++++++++++++++++++++++++++++++++++++++++
+
+Assigns nodes to a group.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID. It supports the ``dry-run`` and ``force`` arguments.
+
+Body parameters:
+
+.. opcode_params:: OP_GROUP_ASSIGN_NODES
+ :exclude: group_name, force, dry_run
+
+Job result:
+
+.. opcode_result:: OP_GROUP_ASSIGN_NODES
+
+
+``/2/groups/[group_name]/tags``
++++++++++++++++++++++++++++++++
+
+Manages per-nodegroup tags.
+
+Supports the following commands: ``GET``, ``PUT``, ``DELETE``.
+
+``GET``
+~~~~~~~
+
+Returns a list of tags.
+
+Example::
+
+ ["tag1", "tag2", "tag3"]
+
+``PUT``
+~~~~~~~
+
+Add a set of tags.
+
+The request as a list of strings should be ``PUT`` to this URI. The
+result will be a job id.
+
+It supports the ``dry-run`` argument.
+
+
+``DELETE``
+~~~~~~~~~~
+
+Delete a tag.
+
+In order to delete a set of tags, the DELETE request should be addressed
+to a URI like::
+
+ /tags?tag=[tag]&tag=[tag]
+
+It supports the ``dry-run`` argument.
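
The deletion URI can be built mechanically; a sketch using Python's
standard library, with made-up tag names and group name:

```python
# Sketch: building the tag-deletion query string described above.
# Group name and tag values are made-up examples.
from urllib.parse import urlencode

tags = ["tag1", "tag2"]
uri = "/2/groups/group1/tags?" + urlencode([("tag", t) for t in tags])
```
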
+
+
+``/2/networks``
++++++++++++++++
+
+The networks resource.
+
+It supports the following commands: ``GET``, ``POST``.
+
+``GET``
+~~~~~~~
+
+Returns a list of all existing networks.
+
+Example::
+
+ [
+ {
+ "name": "network1",
+ "uri": "\/2\/networks\/network1"
+ },
+ {
+ "name": "network2",
+ "uri": "\/2\/networks\/network2"
+ }
+ ]
+
+If the optional bool *bulk* argument is provided and set to a true value
+(i.e. ``?bulk=1``), the output contains detailed information about networks
+as a list.
+
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.NET_FIELDS))`.
+
+Example::
+
+  [
+    {
+      "external_reservations": "10.0.0.0, 10.0.0.1, 10.0.0.15",
+      "free_count": 13,
+      "gateway": "10.0.0.1",
+      "gateway6": null,
+      "group_list": ["default(bridged, prv0)"],
+      "inst_list": [],
+      "mac_prefix": null,
+      "map": "XX.............X",
+      "name": "nat",
+      "network": "10.0.0.0/28",
+      "network6": null,
+      "network_type": "private",
+      "reserved_count": 3,
+      "tags": ["nfdhcpd"]
+    }
+  ]
+
+``POST``
+~~~~~~~~
+
+Creates a network.
+
+If the optional bool *dry-run* argument is provided, the job will not be
+actually executed, only the pre-execution checks will be done.
+
+Returns: a job ID that can be used later for polling.
+
+Body parameters:
+
+.. opcode_params:: OP_NETWORK_ADD
+
+Job result:
+
+.. opcode_result:: OP_NETWORK_ADD
+
+
+``/2/networks/[network_name]``
+++++++++++++++++++++++++++++++
+
+Returns information about a network.
+
+It supports the following commands: ``GET``, ``DELETE``.
+
+``GET``
+~~~~~~~
+
+Returns information about a network, similar to the bulk output from
+the network list.
+
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.NET_FIELDS))`.
+
+``DELETE``
+~~~~~~~~~~
+
+Deletes a network.
+
+It supports the ``dry-run`` argument.
+
+Job result:
+
+.. opcode_result:: OP_NETWORK_REMOVE
+
+
+``/2/networks/[network_name]/modify``
++++++++++++++++++++++++++++++++++++++
+
+Modifies the parameters of a network.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_NETWORK_SET_PARAMS
+
+Job result:
+
+.. opcode_result:: OP_NETWORK_SET_PARAMS
+
+
+``/2/networks/[network_name]/connect``
+++++++++++++++++++++++++++++++++++++++
+
+Connects a network to a nodegroup.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID. It supports the ``dry-run`` argument.
+
+Body parameters:
+
+.. opcode_params:: OP_NETWORK_CONNECT
+
+Job result:
+
+.. opcode_result:: OP_NETWORK_CONNECT
+
+
+``/2/networks/[network_name]/disconnect``
++++++++++++++++++++++++++++++++++++++++++
+
+Disconnects a network from a nodegroup.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID. It supports the ``dry-run`` argument.
+
+Body parameters:
+
+.. opcode_params:: OP_NETWORK_DISCONNECT
+
+Job result:
+
+.. opcode_result:: OP_NETWORK_DISCONNECT
+
+
+``/2/networks/[network_name]/tags``
++++++++++++++++++++++++++++++++++++
+
+Manages per-network tags.
+
+Supports the following commands: ``GET``, ``PUT``, ``DELETE``.
+
+``GET``
+~~~~~~~
+
+Returns a list of tags.
+
+Example::
+
+ ["tag1", "tag2", "tag3"]
+
+``PUT``
+~~~~~~~
+
+Add a set of tags.
+
+The request as a list of strings should be ``PUT`` to this URI. The
+result will be a job id.
+
+It supports the ``dry-run`` argument.
+
+
+``DELETE``
+~~~~~~~~~~
+
+Delete a tag.
+
+In order to delete a set of tags, the DELETE request should be addressed
+to a URI like::
+
+ /tags?tag=[tag]&tag=[tag]
+
+It supports the ``dry-run`` argument.
+
+
+``/2/instances-multi-alloc``
+++++++++++++++++++++++++++++
+
+Tries to allocate multiple instances.
+
+It supports the following commands: ``POST``.
+
+``POST``
+~~~~~~~~
+
+Body parameters:
+
+.. opcode_params:: OP_INSTANCE_MULTI_ALLOC
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_MULTI_ALLOC
``/2/instances``
(i.e. ``?bulk=1``), the output contains detailed information about
instances as a list.
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`.
+
Example::
[
``__version__`` (int, required)
Must be ``1`` (older Ganeti versions used a different format for
- instance creation requests, version ``0``, but that format is not
- documented).
-``mode`` (string, required)
- Instance creation mode.
-``name`` (string, required)
- Instance name.
-``disk_template`` (string, required)
- Disk template for instance.
-``disks`` (list, required)
- List of disk definitions. Example: ``[{"size": 100}, {"size": 5}]``.
- Each disk definition must contain a ``size`` value and can contain an
- optional ``mode`` value denoting the disk access mode (``ro`` or
- ``rw``).
-``nics`` (list, required)
- List of NIC (network interface) definitions. Example: ``[{}, {},
- {"ip": "198.51.100.4"}]``. Each NIC definition can contain the
- optional values ``ip``, ``mode``, ``link`` and ``bridge``.
-``os`` (string, required)
- Instance operating system.
-``osparams`` (dictionary)
- Dictionary with OS parameters. If not valid for the given OS, the job
- will fail.
-``force_variant`` (bool)
- Whether to force an unknown variant.
-``no_install`` (bool)
- Do not install the OS (will enable no-start)
-``pnode`` (string)
- Primary node.
-``snode`` (string)
- Secondary node.
-``src_node`` (string)
- Source node for import.
-``src_path`` (string)
- Source directory for import.
-``start`` (bool)
- Whether to start instance after creation.
-``ip_check`` (bool)
- Whether to ensure instance's IP address is inactive.
-``name_check`` (bool)
- Whether to ensure instance's name is resolvable.
-``file_storage_dir`` (string)
- File storage directory.
-``file_driver`` (string)
- File storage driver.
-``iallocator`` (string)
- Instance allocator name.
-``source_handshake`` (list)
- Signed handshake from source (remote import only).
-``source_x509_ca`` (string)
- Source X509 CA in PEM format (remote import only).
-``source_instance_name`` (string)
- Source instance name (remote import only).
-``hypervisor`` (string)
- Hypervisor name.
-``hvparams`` (dict)
- Hypervisor parameters, hypervisor-dependent.
-``beparams`` (dict)
- Backend parameters.
+ instance creation requests, version ``0``, but that format is no
+ longer supported)
+
+.. opcode_params:: OP_INSTANCE_CREATE
+
+Earlier versions used parameters named ``name`` and ``os``. These have
+been replaced by ``instance_name`` and ``os_type`` to match the
+underlying opcode. The old names can still be used.
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_CREATE
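
As a sketch, a version-1 creation request body could look like the
following; the instance name, OS and disk sizes are made-up examples,
and the full parameter list is the generated one above:

```python
# Illustrative POST body for /2/instances. All values are examples;
# disk sizes are in MiB and "create" is assumed as the creation mode.
import json

body = json.dumps({
    "__version__": 1,
    "mode": "create",
    "instance_name": "inst1.example.com",
    "disk_template": "drbd",
    "disks": [{"size": 100}, {"size": 5}],
    "nics": [{}],
    "os_type": "debootstrap+default",
})
```
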
``/2/instances/[instance_name]``
Returns information about an instance, similar to the bulk output from
the instance list.
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.I_FIELDS))`.
+
``DELETE``
~~~~~~~~~~
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REMOVE
+
``/2/instances/[instance_name]/info``
+++++++++++++++++++++++++++++++++++++++
configuration without querying the instance's nodes. The result will be
a job id.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_QUERY_DATA
+
``/2/instances/[instance_name]/reboot``
+++++++++++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REBOOT
+
``/2/instances/[instance_name]/shutdown``
+++++++++++++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+.. opcode_params:: OP_INSTANCE_SHUTDOWN
+ :exclude: instance_name, dry_run
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_SHUTDOWN
+
``/2/instances/[instance_name]/startup``
++++++++++++++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_STARTUP
+
+
``/2/instances/[instance_name]/reinstall``
++++++++++++++++++++++++++++++++++++++++++++++
``POST``
~~~~~~~~
-Takes the parameters ``mode`` (one of ``replace_on_primary``,
-``replace_on_secondary``, ``replace_new_secondary`` or
-``replace_auto``), ``disks`` (comma separated list of disk indexes),
-``remote_node`` and ``iallocator``.
+Returns a job ID.
+
+Body parameters:
-Either ``remote_node`` or ``iallocator`` needs to be defined when using
-``mode=replace_new_secondary``.
+.. opcode_params:: OP_INSTANCE_REPLACE_DISKS
+ :exclude: instance_name
-``mode`` is a mandatory parameter. ``replace_auto`` tries to determine
-the broken disk(s) on its own and replacing it.
+Ganeti 2.4 and below used query parameters. Those are deprecated and
+should no longer be used.
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_REPLACE_DISKS
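
A body sketch for this resource; the mode values are those accepted by
the opcode, and the node name is made up:

```python
# Illustrative body for replace-disks. With mode replace_new_secondary
# either "remote_node" (as here) or "iallocator" must be given;
# "disks" lists disk indexes.
import json

body = json.dumps({
    "mode": "replace_new_secondary",
    "remote_node": "node4.example.com",
    "disks": [0, 1],
})
```
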
``/2/instances/[instance_name]/activate-disks``
Takes the bool parameter ``ignore_size``. When set ignore the recorded
size (useful for forcing activation when recorded size is wrong).
+Job result:
+
+.. opcode_result:: OP_INSTANCE_ACTIVATE_DISKS
+
``/2/instances/[instance_name]/deactivate-disks``
+++++++++++++++++++++++++++++++++++++++++++++++++
Takes no parameters.
+Job result:
+
+.. opcode_result:: OP_INSTANCE_DEACTIVATE_DISKS
+
+
+``/2/instances/[instance_name]/recreate-disks``
++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Recreates disks of an instance. Supports the following commands:
+``POST``.
+
+``POST``
+~~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_INSTANCE_RECREATE_DISKS
+ :exclude: instance_name
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_RECREATE_DISKS
+
+
+``/2/instances/[instance_name]/disk/[disk_index]/grow``
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Grows one disk of an instance.
+
+Supports the following commands: ``POST``.
+
+``POST``
+~~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_INSTANCE_GROW_DISK
+ :exclude: instance_name, disk
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_GROW_DISK
+
``/2/instances/[instance_name]/prepare-export``
+++++++++++++++++++++++++++++++++++++++++++++++++
Takes one parameter, ``mode``, for the export mode. Returns a job ID.
+Job result:
+
+.. opcode_result:: OP_BACKUP_PREPARE
+
``/2/instances/[instance_name]/export``
+++++++++++++++++++++++++++++++++++++++++++++++++
Body parameters:
-``mode`` (string)
- Export mode.
-``destination`` (required)
- Destination information, depends on export mode.
-``shutdown`` (bool, required)
- Whether to shutdown instance before export.
-``remove_instance`` (bool)
- Whether to remove instance after export.
-``x509_key_name``
- Name of X509 key (remote export only).
-``destination_x509_ca``
- Destination X509 CA (remote export only).
+.. opcode_params:: OP_BACKUP_EXPORT
+ :exclude: instance_name
+ :alias: target_node=destination
+
+Job result:
+
+.. opcode_result:: OP_BACKUP_EXPORT
``/2/instances/[instance_name]/migrate``
Body parameters:
-``mode`` (string)
- Migration mode.
-``cleanup`` (bool)
- Whether a previously failed migration should be cleaned up.
+.. opcode_params:: OP_INSTANCE_MIGRATE
+ :exclude: instance_name, live
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_MIGRATE
+
+
+``/2/instances/[instance_name]/failover``
++++++++++++++++++++++++++++++++++++++++++
+
+Does a failover of an instance.
+
+Supports the following commands: ``PUT``.
+
+``PUT``
+~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_INSTANCE_FAILOVER
+ :exclude: instance_name
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_FAILOVER
``/2/instances/[instance_name]/rename``
Body parameters:
-``new_name`` (string, required)
- New instance name.
-``ip_check`` (bool)
- Whether to ensure instance's IP address is inactive.
-``name_check`` (bool)
- Whether to ensure instance's name is resolvable.
+.. opcode_params:: OP_INSTANCE_RENAME
+ :exclude: instance_name
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_RENAME
``/2/instances/[instance_name]/modify``
Body parameters:
-``osparams`` (dict)
- Dictionary with OS parameters.
-``hvparams`` (dict)
- Hypervisor parameters, hypervisor-dependent.
-``beparams`` (dict)
- Backend parameters.
-``force`` (bool)
- Whether to force the operation.
-``nics`` (list)
- List of NIC changes. Each item is of the form ``(op, settings)``.
- ``op`` can be ``add`` to add a new NIC with the specified settings,
- ``remove`` to remove the last NIC or a number to modify the settings
- of the NIC with that index.
-``disks`` (list)
- List of disk changes. See ``nics``.
-``disk_template`` (string)
- Disk template for instance.
-``remote_node`` (string)
- Secondary node (used when changing disk template).
-``os_name`` (string)
- Change instance's OS name. Does not reinstall the instance.
-``force_variant`` (bool)
- Whether to force an unknown variant.
+.. opcode_params:: OP_INSTANCE_SET_PARAMS
+ :exclude: instance_name
+
+Job result:
+
+.. opcode_result:: OP_INSTANCE_SET_PARAMS
+
+
+``/2/instances/[instance_name]/console``
+++++++++++++++++++++++++++++++++++++++++
+
+Request information for connecting to instance's console.
+
+.. pyassert::
+
+ not (hasattr(rlib2.R_2_instances_name_console, "PUT") or
+ hasattr(rlib2.R_2_instances_name_console, "POST") or
+ hasattr(rlib2.R_2_instances_name_console, "DELETE"))
+
+Supports the following commands: ``GET``. Requires authentication with
+one of the following options:
+:pyeval:`utils.CommaJoin(rlib2.R_2_instances_name_console.GET_ACCESS)`.
+
+``GET``
+~~~~~~~
+
+Returns a dictionary containing information about the instance's
+console. Contained keys:
+
+.. pyassert::
+
+ constants.CONS_ALL == frozenset([
+ constants.CONS_MESSAGE,
+ constants.CONS_SSH,
+ constants.CONS_VNC,
+ constants.CONS_SPICE,
+ ])
+
+``instance``
+ Instance name
+``kind``
+ Console type, one of :pyeval:`constants.CONS_SSH`,
+ :pyeval:`constants.CONS_VNC`, :pyeval:`constants.CONS_SPICE`
+ or :pyeval:`constants.CONS_MESSAGE`
+``message``
+ Message to display (:pyeval:`constants.CONS_MESSAGE` type only)
+``host``
+ Host to connect to (:pyeval:`constants.CONS_SSH`,
+ :pyeval:`constants.CONS_VNC` or :pyeval:`constants.CONS_SPICE` only)
+``port``
+ TCP port to connect to (:pyeval:`constants.CONS_VNC` or
+ :pyeval:`constants.CONS_SPICE` only)
+``user``
+ Username to use (:pyeval:`constants.CONS_SSH` only)
+``command``
+ Command to execute on machine (:pyeval:`constants.CONS_SSH` only)
+``display``
+ VNC display number (:pyeval:`constants.CONS_VNC` only)
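
As an illustration, a client could turn an ``ssh``-type answer into a
command line; the response dict in this sketch is entirely made up,
including the assumption that ``command`` arrives as a list of
arguments:

```python
# Sketch: assembling an SSH command from a hypothetical console
# answer. Every value below is a made-up example.
console = {
    "instance": "inst1.example.com",
    "kind": "ssh",
    "host": "node1.example.com",
    "port": 22,
    "user": "root",
    "command": ["xm", "console", "inst1.example.com"],
}

argv = None
if console["kind"] == "ssh":
    argv = (["ssh", "-l", console["user"], "-p", str(console["port"]),
             console["host"], "--"] + list(console["command"]))
```
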
``/2/instances/[instance_name]/tags``
Returns: a dictionary with jobs id and uri.
+If the optional bool *bulk* argument is provided and set to a true value
+(i.e. ``?bulk=1``), the output contains detailed information about jobs
+as a list.
+
+Returned fields for bulk requests (unlike other bulk requests, these
+fields are not the same as for per-job requests):
+:pyeval:`utils.CommaJoin(sorted(rlib2.J_FIELDS_BULK))`.
+
``/2/jobs/[job_id]``
++++++++++++++++++++
``GET``
~~~~~~~
-Returns a job status.
-
-Returns: a dictionary with job parameters.
+Returns a dictionary with job parameters, containing the fields
+:pyeval:`utils.CommaJoin(sorted(rlib2.J_FIELDS))`.
The result includes:
effects. But whether it make sense to retry depends on the error
classification:
-``resolver_error``
+.. pyassert::
+
+ errors.ECODE_ALL == set([errors.ECODE_RESOLVER, errors.ECODE_NORES,
+ errors.ECODE_INVAL, errors.ECODE_STATE, errors.ECODE_NOENT,
+ errors.ECODE_EXISTS, errors.ECODE_NOTUNIQUE, errors.ECODE_FAULT,
+ errors.ECODE_ENVIRON])
+
+:pyeval:`errors.ECODE_RESOLVER`
Resolver errors. This usually means that a name doesn't exist in DNS,
so if it's a case of slow DNS propagation the operation can be retried
later.
-``insufficient_resources``
+:pyeval:`errors.ECODE_NORES`
Not enough resources (iallocator failure, disk space, memory,
etc.). If the resources on the cluster increase, the operation might
succeed.
-``wrong_input``
+:pyeval:`errors.ECODE_INVAL`
Wrong arguments (at syntax level). The operation will not ever be
accepted unless the arguments change.
-``wrong_state``
+:pyeval:`errors.ECODE_STATE`
Wrong entity state. For example, live migration has been requested for
a down instance, or instance creation on an offline node. The
operation can be retried once the resource has changed state.
-``unknown_entity``
+:pyeval:`errors.ECODE_NOENT`
Entity not found. For example, information has been requested for an
unknown instance.
-``already_exists``
+:pyeval:`errors.ECODE_EXISTS`
Entity already exists. For example, instance creation has been
requested for an already-existing instance.
-``resource_not_unique``
+:pyeval:`errors.ECODE_NOTUNIQUE`
Resource not unique (e.g. MAC or IP duplication).
-``internal_error``
+:pyeval:`errors.ECODE_FAULT`
Internal cluster error. For example, a node is unreachable but not set
offline, or the ganeti node daemons are not working, etc. A
``gnt-cluster verify`` should be run.
-``environment_error``
+:pyeval:`errors.ECODE_ENVIRON`
Environment error (e.g. node disk error). A ``gnt-cluster verify``
should be run.
dict:
``fields``
- The job fields on which to watch for changes.
+ The job fields on which to watch for changes
``previous_job_info``
- Previously received field values or None if not yet available.
+ Previously received field values or None if not yet available
``previous_log_serial``
Highest log serial number received so far or None if not yet
- available.
+ available
Returns None if no changes have been detected and a dict with two keys,
``job_info`` and ``log_entries`` otherwise.
}
]
-If the optional 'bulk' argument is provided and set to 'true' value (i.e
-'?bulk=1'), the output contains detailed information about nodes as a
-list.
+If the optional bool *bulk* argument is provided and set to a true value
+(i.e. ``?bulk=1``), the output contains detailed information about nodes
+as a list.
+
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`.
Example::
It supports the following commands: ``GET``.
+Returned fields: :pyeval:`utils.CommaJoin(sorted(rlib2.N_FIELDS))`.
+
+``/2/nodes/[node_name]/powercycle``
++++++++++++++++++++++++++++++++++++
+
+Powercycles a node. Supports the following commands: ``POST``.
+
+``POST``
+~~~~~~~~
+
+Returns a job ID.
+
+Job result:
+
+.. opcode_result:: OP_NODE_POWERCYCLE
+
+
``/2/nodes/[node_name]/evacuate``
+++++++++++++++++++++++++++++++++
-Evacuates all secondary instances off a node.
+Evacuates instances off a node.
It supports the following commands: ``POST``.
``POST``
~~~~~~~~
-To evacuate a node, either one of the ``iallocator`` or ``remote_node``
-parameters must be passed::
+Returns a job ID. The result of the job will contain the IDs of the
+individual jobs submitted to evacuate the node.
- evacuate?iallocator=[iallocator]
- evacuate?remote_node=[nodeX.example.com]
+Body parameters:
+
+.. opcode_params:: OP_NODE_EVACUATE
+ :exclude: nodes
-The result value will be a list, each element being a triple of the job
-id (for this specific evacuation), the instance which is being evacuated
-by this job, and the node to which it is being relocated. In case the
-node is already empty, the result will be an empty list (without any
-jobs being submitted).
+Up to and including Ganeti 2.4, query arguments were used. Those are no
+longer supported. The new request can be detected by the presence of the
+:pyeval:`rlib2._NODE_EVAC_RES1` feature string.
-And additional parameter ``early_release`` signifies whether to try to
-parallelize the evacuations, at the risk of increasing I/O contention
-and increasing the chances of data loss, if the primary node of any of
-the instances being evacuated is not fully healthy.
+Job result:
-If the dry-run parameter was specified, then the evacuation jobs were
-not actually submitted, and the job IDs will be null.
+.. opcode_result:: OP_NODE_EVACUATE
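
A body sketch; one of ``remote_node`` or ``iallocator`` selects the
target, and ``hail`` is used here purely as an example allocator name:

```python
# Illustrative body for node evacuation: delegate target selection to
# an iallocator (name is an example) without early release of storage.
import json

body = json.dumps({"iallocator": "hail", "early_release": False})
```
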
``/2/nodes/[node_name]/migrate``
~~~~~~~~
If no mode is explicitly specified, each instance's hypervisor default
-migration mode will be used. Query parameters:
+migration mode will be used. Body parameters:
+
+.. opcode_params:: OP_NODE_MIGRATE
+ :exclude: node_name
-``live`` (bool)
- If set, use live migration if available.
-``mode`` (string)
- Sets migration mode, ``live`` for live migration and ``non-live`` for
- non-live migration. Supported by Ganeti 2.2 and above.
+The query arguments used up to and including Ganeti 2.4 are deprecated
+and should no longer be used. The new request format can be detected by
+the presence of the :pyeval:`rlib2._NODE_MIGRATE_REQV1` feature string.
+
+Job result:
+
+.. opcode_result:: OP_NODE_MIGRATE
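
A minimal body sketch, forcing live migration for all instances on the
node (``live`` and ``non-live`` being the documented mode values):

```python
# Illustrative body for node migration selecting live migration.
import json

body = json.dumps({"mode": "live"})
```
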
``/2/nodes/[node_name]/role``
The role is always one of the following:
- drained
- - master
- master-candidate
- offline
- regular
+Note that the 'master' role is special, and currently it can't be
+modified via RAPI, only via the command line (``gnt-cluster
+master-failover``).
+
``GET``
~~~~~~~
It supports the bool ``force`` argument.
+Job result:
+
+.. opcode_result:: OP_NODE_SET_PARAMS
+
+
+``/2/nodes/[node_name]/modify``
++++++++++++++++++++++++++++++++
+
+Modifies the parameters of a node. Supports the following commands:
+``POST``.
+
+``POST``
+~~~~~~~~
+
+Returns a job ID.
+
+Body parameters:
+
+.. opcode_params:: OP_NODE_SET_PARAMS
+ :exclude: node_name
+
+Job result:
+
+.. opcode_result:: OP_NODE_SET_PARAMS
+
+
``/2/nodes/[node_name]/storage``
++++++++++++++++++++++++++++++++
``GET``
~~~~~~~
+.. pyassert::
+
+ constants.VALID_STORAGE_TYPES == set([constants.ST_FILE,
+ constants.ST_LVM_PV,
+ constants.ST_LVM_VG])
+
Requests a list of storage units on a node. Requires the parameters
-``storage_type`` (one of ``file``, ``lvm-pv`` or ``lvm-vg``) and
+``storage_type`` (one of :pyeval:`constants.ST_FILE`,
+:pyeval:`constants.ST_LVM_PV` or :pyeval:`constants.ST_LVM_VG`) and
``output_fields``. The result will be a job id, using which the result
can be retrieved.
~~~~~~~
Modifies parameters of storage units on the node. Requires the
-parameters ``storage_type`` (one of ``file``, ``lvm-pv`` or ``lvm-vg``)
+parameters ``storage_type`` (one of :pyeval:`constants.ST_FILE`,
+:pyeval:`constants.ST_LVM_PV` or :pyeval:`constants.ST_LVM_VG`)
and ``name`` (name of the storage unit). Parameters can be passed
-additionally. Currently only ``allocatable`` (bool) is supported. The
-result will be a job id.
+additionally. Currently only :pyeval:`constants.SF_ALLOCATABLE` (bool)
+is supported. The result will be a job id.
+
+Job result:
+
+.. opcode_result:: OP_NODE_MODIFY_STORAGE
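
Since these are request parameters, the query string can be assembled
mechanically; the storage unit name in this sketch is a made-up
example:

```python
# Sketch: query parameters for a storage modification request, marking
# an example LVM physical volume as allocatable.
from urllib.parse import urlencode

params = urlencode({
    "storage_type": "lvm-pv",
    "name": "/dev/sda3",
    "allocatable": 1,
})
```
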
+
``/2/nodes/[node_name]/storage/repair``
+++++++++++++++++++++++++++++++++++++++
``PUT``
~~~~~~~
+.. pyassert::
+
+ constants.VALID_STORAGE_OPERATIONS == {
+ constants.ST_LVM_VG: set([constants.SO_FIX_CONSISTENCY]),
+ }
+
Repairs a storage unit on the node. Requires the parameters
-``storage_type`` (currently only ``lvm-vg`` can be repaired) and
-``name`` (name of the storage unit). The result will be a job id.
+``storage_type`` (currently only :pyeval:`constants.ST_LVM_VG` can be
+repaired) and ``name`` (name of the storage unit). The result will be a
+job id.
+
+Job result:
+
+.. opcode_result:: OP_REPAIR_NODE_STORAGE
+
``/2/nodes/[node_name]/tags``
+++++++++++++++++++++++++++++
It supports the ``dry-run`` argument.
+``/2/query/[resource]``
++++++++++++++++++++++++
+
+Requests resource information. Available fields can be found in the man
+pages and using ``/2/query/[resource]/fields``. The resource is one of
+:pyeval:`utils.CommaJoin(constants.QR_VIA_RAPI)`. See the :doc:`query2
+design document <design-query2>` for more details.
+
+.. pyassert::
+
+ (rlib2.R_2_query.GET_ACCESS == rlib2.R_2_query.PUT_ACCESS and
+ not (hasattr(rlib2.R_2_query, "POST") or
+ hasattr(rlib2.R_2_query, "DELETE")))
+
+Supports the following commands: ``GET``, ``PUT``. Requires
+authentication with one of the following options:
+:pyeval:`utils.CommaJoin(rlib2.R_2_query.GET_ACCESS)`.
+
+``GET``
+~~~~~~~
+
+Returns list of included fields and actual data. Takes a query parameter
+named "fields", containing a comma-separated list of field names. Does
+not support filtering.
+
+``PUT``
+~~~~~~~
+
+Returns list of included fields and actual data. The list of requested
+fields can either be given as the query parameter "fields" or as a body
+parameter with the same name. The optional body parameter "filter" can
+be given and must be either ``null`` or a list containing filter
+operators.
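
Filters are nested lists in the query2 format; a ``PUT`` body sketch,
where the field name and value are examples and the equality filter
shape is assumed from the query2 design:

```python
# Illustrative PUT body for /2/query/[resource]: request one field and
# filter on items whose "name" equals a made-up value.
import json

body = json.dumps({
    "fields": ["name"],
    "filter": ["=", "name", "node1.example.com"],
})
```
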
+
+
+``/2/query/[resource]/fields``
+++++++++++++++++++++++++++++++
+
+Request list of available fields for a resource. The resource is one of
+:pyeval:`utils.CommaJoin(constants.QR_VIA_RAPI)`. See the
+:doc:`query2 design document <design-query2>` for more details.
+
+Supports the following commands: ``GET``.
+
+``GET``
+~~~~~~~
+
+Returns a list of field descriptions for available fields. Takes an
+optional query parameter named "fields", containing a comma-separated
+list of field names.
+
+
``/2/os``
+++++++++