Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.0

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents fully automatic node evacuation
and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows customization of the cluster layout
per the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

Two parts of the Ganeti operation are impacted by the
auto-allocation: how the cluster knows which allocator algorithms are
available, and how the admin uses them when creating instances.

An allocation algorithm is simply the filename of a program installed
in a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.

Ganeti will then search for an allocator script in the configured
list, using the first one whose filename matches the name given by
the user.

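
The lookup described above can be sketched in a few lines of Python;
the ``find_allocator`` helper and the hard-coded search path below
are illustrative assumptions, not Ganeti internals:

```python
import os

# Example search path; the real list is fixed at configure time via
# --with-iallocator-search-path (these directories are an assumption).
DEFAULT_SEARCH_PATH = ["/usr/local/lib/ganeti/iallocators",
                       "/usr/lib/ganeti/iallocators"]


def find_allocator(name, search_path=DEFAULT_SEARCH_PATH):
    """Return the first script in search_path matching the given name."""
    for dirname in search_path:
        candidate = os.path.join(dirname, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```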

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which causes automatic assignment of nodes with the given
iallocator. The selected node(s) will be shown as part of the command
output.

IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
will be the following:

#. Ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with a non-zero exit code, this is
   considered a general failure and the full output will be reported
   to the users; this can be the case when the allocator can't parse
   the input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)

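
A complete, minimal allocator script following these three steps
could look like the sketch below; the trivial ``choose_nodes`` policy
(pick the first node names alphabetically, ignoring capacity) is
purely illustrative and not something Ganeti provides:

```python
#!/usr/bin/env python
# Sketch of an iallocator script: read the input message from the file
# given as the single argument, then print a JSON response on stdout.
import json
import sys


def choose_nodes(msg):
    """Illustrative policy: pick the first required_nodes node names."""
    needed = msg["request"]["required_nodes"]
    candidates = sorted(msg["nodes"])
    if len(candidates) < needed:
        raise ValueError("not enough nodes in the cluster")
    return candidates[:needed]


def run(input_file):
    with open(input_file) as fh:
        msg = json.load(fh)
    try:
        nodes = choose_nodes(msg)
        return {"success": True, "info": "allocation successful",
                "nodes": nodes}
    except (KeyError, ValueError) as err:
        # On failure, success is false and the node list must be empty
        return {"success": False, "info": str(err), "nodes": []}


if __name__ == "__main__":
    print(json.dumps(run(sys.argv[1])))
```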

Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary
containing the following:

version
  the version of the protocol; this document
  specifies version 1

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

request
  a dictionary containing the request data:

  type
    the request type; this can be either ``allocate`` or ``relocate``;
    the ``allocate`` request is used when a new instance needs to be
    placed on the cluster, while the ``relocate`` request is used when
    an existing instance needs to be moved within the cluster

  name
    the name of the instance; if the request is a relocation, then
    this name will be found in the list of instances (see below),
    otherwise it is the FQDN of the new instance

  required_nodes
    how many nodes the algorithm should return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti, as allocator scripts are then
    less sensitive to changes in the disk templates

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it

  If the request is an allocation, then there are extra fields in the
  request dictionary:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either ``r`` or ``w``, denoting whether the disk is read-only
      or writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  If the request is of type relocate, then there is one more entry in
  the request dictionary, named ``relocate_from``, which contains a
  list of nodes to move the instance away from; note that with Ganeti
  2.0, this list will always contain a single node, the current
  secondary of the instance.

instances
  a dictionary with the data for the instances currently existing on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition
  of:

  should_run
    whether this instance is set to run (but not the actual status of
    the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains:

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

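
As an illustration of how an allocator might use these per-node
fields, the sketch below filters candidate nodes on the
``free_memory`` and ``free_disk`` values; the ``candidate_nodes``
helper is hypothetical, not part of the protocol:

```python
def candidate_nodes(msg, memory, disk_space_total):
    """Return node names with enough free memory and disk (a sketch).

    msg is the decoded input message; memory and disk_space_total are
    the requirements taken from its request entry, in mebibytes.
    """
    return sorted(
        name
        for name, node in msg["nodes"].items()
        if node["free_memory"] >= memory
        and node["free_disk"] >= disk_space_total
    )
```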

Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict having three keys:

success
  a boolean value denoting whether the allocation was successful or
  not

info
  a string with information from the script; if the allocation fails,
  this will be shown to the user

nodes
  the list of nodes computed by the algorithm; even if the algorithm
  failed (i.e. success is false), this must be returned as an empty
  list; also note that on success the length of this list must equal
  the ``required_nodes`` entry in the input message, otherwise Ganeti
  will consider the result as failed

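
These rules can be expressed as a small validity check; the
``check_response`` function below is a hypothetical helper sketching
how the Ganeti side might verify a response, not actual Ganeti code:

```python
def check_response(resp, required_nodes):
    """Check a response dict against the protocol rules (a sketch)."""
    # The response must have exactly the three documented keys
    if sorted(resp) != ["info", "nodes", "success"]:
        return False
    if not resp["success"]:
        # A failed allocation must still return an (empty) node list
        return resp["nodes"] == []
    # A successful one must return exactly required_nodes entries
    return len(resp["nodes"]) == required_nodes
```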

Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation::

  {
    "cluster_tags": [],
    "request": {
      "required_nodes": 2,
      "name": "instance3.example.com",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "type": "allocate",
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "disk_space_total": 3328,
      "os": "etch-image"
    },
    "cluster_name": "cluster1.example.com",
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "node1.example.com"
        ],
        "os": "etch-image"
      },
      "instance2.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "etch-image"
      }
    },
    "version": 1,
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "192.168.1.1",
        "secondary_ip": "192.168.2.1",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "192.168.1.2",
        "secondary_ip": "192.168.2.2",
        "tags": ["test"],
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com": {
        "total_disk": 572184,
        "primary_ip": "192.168.1.3",
        "secondary_ip": "192.168.2.3",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    }
  }

Input message, relocation. Since only the request entry in the input
message changes, we show only this entry::

  "request": {
    "relocate_from": [
      "node3.example.com"
    ],
    "required_nodes": 1,
    "type": "relocate",
    "name": "instance2.example.com",
    "disk_space_total": 832
  },

Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "info": "Allocation successful",
    "nodes": [
      "node2.example.com",
      "node1.example.com"
    ],
    "success": true
  }

Failed response message::

  {
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "nodes": [],
    "success": false
  }

Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'dumb-allocator': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator dumb-allocator -o etch-image instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'dumb-allocator': Can't find a suitable node for position 2 (already selected: node1.example.com)