
Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.1

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents fully automatic node evacuation
and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows the cluster layout to be customized per
the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

Two parts of Ganeti's operation are impacted by auto-allocation: how
the cluster knows which allocator algorithms are available, and how
the admin uses them when creating instances.

An allocation algorithm is simply the filename of a program installed
in a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.

Ganeti will then search for an allocator script in the configured
list, using the first one whose filename matches the one given by the
user.

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which causes nodes to be assigned automatically by the named
iallocator. The selected node(s) will be shown as part of the command
output.

IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
is the following:

#. Ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with an exit code different from zero, it is
   considered a general failure and the full output will be reported to
   the users; this can be the case when the allocator can't parse the
   input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)

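The three steps above can be sketched as a minimal allocator script.
This is an illustrative sketch only (it is not shipped with Ganeti),
and the naive "most free disk" placement policy and the ``run`` helper
name are assumptions made for the example:

```python
#!/usr/bin/env python3
"""Minimal iallocator sketch; the real reference implementation is
"hail" from the ganeti-htools project."""

import json
import sys


def run(input_path):
    # Ganeti passes a single argument: the path of a file holding the
    # JSON-encoded input message.
    with open(input_path) as fh:
        msg = json.load(fh)

    request = msg["request"]
    if request["type"] != "allocate":
        # Only the "allocate" case is sketched here.
        return {"success": False,
                "info": "unsupported request type: %s" % request["type"],
                "result": []}

    # A deliberately naive policy: prefer the nodes with the most free disk.
    ranked = sorted(msg["nodes"],
                    key=lambda name: msg["nodes"][name].get("free_disk", 0),
                    reverse=True)
    chosen = ranked[:request["required_nodes"]]
    return {"success": True, "info": "placement computed", "result": chosen}


if __name__ == "__main__" and len(sys.argv) > 1:
    # Exit code zero plus a JSON response on stdout signals success
    # of the script itself (not necessarily of the allocation).
    json.dump(run(sys.argv[1]), sys.stdout)
    sys.exit(0)
```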
Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary containing
the following:

version
  the version of the protocol; this document specifies version 2

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

enabled_hypervisors
  the list of enabled hypervisors

request
  a dictionary containing the request data:

  type
    the request type; this can be either ``allocate``, ``relocate`` or
    ``multi-evacuate``; the ``allocate`` request is used when a new
    instance needs to be placed on the cluster, while the ``relocate``
    request is used when an existing instance needs to be moved within
    the cluster; the ``multi-evacuate`` request asks the script to
    compute the optimal relocation solution for all secondary
    instances of the given nodes

  The following keys are needed in allocate/relocate mode:

  name
    the name of the instance; if the request is a relocation, then this
    name will be found in the list of instances (see below), otherwise
    it is the FQDN of the new instance

  required_nodes
    how many nodes the algorithm should return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti, as allocator scripts are then
    less sensitive to changes in the disk templates

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it

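For illustration, the relationship between the disk list, the template
and ``disk_space_total`` could be sketched as below. The 128 MiB
per-disk DRBD metadata overhead is an assumption inferred from the
allocation example later in this document; Ganeti itself computes the
authoritative value:

```python
# Hypothetical helper, not a Ganeti API: approximate disk_space_total
# (in mebibytes) from the disk definitions and the disk template.
_DRBD_META_MIB = 128  # assumed per-disk DRBD metadata overhead


def disk_space_total(disks, disk_template):
    """Approximate total disk usage on each target node."""
    total = sum(d["size"] for d in disks)
    if disk_template == "drbd":
        # DRBD needs extra space per disk for its metadata.
        total += _DRBD_META_MIB * len(disks)
    return total
```

With the two disks from the allocation example (1024 and 2048 MiB) and
the ``drbd`` template, this yields 3328, matching the example's
``disk_space_total``.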
  If the request is an allocation, then there are extra fields in the
  request dictionary:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either ``r`` or ``w``, denoting whether the disk is read-only or
      writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  hypervisor
    the hypervisor of this instance

  If the request is of type relocate, then there is one more entry in
  the request dictionary, named ``relocate_from``, containing a list
  of nodes to move the instance away from; note that with Ganeti 2.0,
  this list will always contain a single node, the current secondary
  of the instance.

  The multi-evacuate mode instead has a single request argument:

  evac_nodes
    the names of the nodes to be evacuated

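As a sketch of which keys each request type carries (a hypothetical
validation helper assembled from the descriptions above, not part of
the Ganeti API):

```python
# Required request keys per request type, per the spec above.
REQUIRED_KEYS = {
    "allocate": {"name", "required_nodes", "disk_space_total", "disks",
                 "nics", "vcpus", "disk_template", "memory", "os",
                 "tags", "hypervisor"},
    "relocate": {"name", "required_nodes", "disk_space_total",
                 "relocate_from"},
    "multi-evacuate": {"evac_nodes"},
}


def missing_keys(request):
    """Return the keys required by this request's type but not present."""
    wanted = REQUIRED_KEYS[request["type"]]
    return sorted(wanted - set(request))
```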
nodegroups
  a dictionary with the data for the cluster's node groups; it is keyed
  on the group UUID, and the values are a dictionary with the following
  keys:

  name
    the node group name

  alloc_policy
    the allocation policy of the node group

instances
  a dictionary with the data for the currently existing instances on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition of:

  admin_up
    whether this instance is set to run (but not the actual status of
    the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains [*]_ :

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

  master_candidate
    a boolean flag denoting whether this node is a master candidate

  drained
    a boolean flag denoting whether this node is being drained

  offline
    a boolean flag denoting whether this node is offline

  i_pri_memory
    total memory required by primary instances

  i_pri_up_memory
    total memory required by running primary instances

  group
    the node group that this node belongs to

  No allocations should be made on nodes having either the ``drained``
  or ``offline`` flag set. More details about these node status flags
  are available in the :manpage:`ganeti(7)` manpage.

.. [*] Note that no run-time data is present for offline, drained or
   non-vm_capable nodes; this means the keys total_memory,
   reserved_memory, free_memory, total_disk, free_disk, total_cpus,
   i_pri_memory and i_pri_up_memory will be absent

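The rule that ``drained`` and ``offline`` nodes must not receive
allocations can be expressed compactly (an illustrative helper, not a
Ganeti function):

```python
def allocatable(nodes):
    """Return the names of nodes eligible for new allocations.

    ``nodes`` is the "nodes" dictionary from the input message; nodes
    with the "drained" or "offline" flag set are excluded.
    """
    return sorted(name for name, info in nodes.items()
                  if not info.get("drained") and not info.get("offline"))
```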
Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict, having three keys:

success
  a boolean value denoting whether the allocation was successful or not

info
  a string with information from the script; if the allocation fails,
  this will be shown to the user

result
  the output of the algorithm; even if the algorithm failed
  (i.e. success is false), this must be returned as an empty list

  for allocate/relocate, this is the list of node(s) for the instance;
  note that the length of this list must equal the ``required_nodes``
  entry in the input message, otherwise Ganeti will consider the result
  as failed

  for multi-evacuate mode, this is a list of lists; each element of
  the list is a pair of instance name and new secondary node

.. note:: Current Ganeti versions accept either ``result`` or ``nodes``
   as a backwards-compatibility measure (older versions only supported
   ``nodes``)

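The length check described above can be sketched as follows (a
hypothetical helper mirroring the rule, not the actual Ganeti code):

```python
def response_ok(response, required_nodes):
    """Check an allocate/relocate response against the protocol rules.

    The response must report success and its "result" list must contain
    exactly ``required_nodes`` entries, otherwise Ganeti treats the
    result as failed.
    """
    if not response.get("success"):
        return False
    return len(response["result"]) == required_nodes
```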
Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation::

  {
    "cluster_tags": [],
    "request": {
      "required_nodes": 2,
      "name": "instance3.example.com",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "type": "allocate",
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "disk_space_total": 3328,
      "os": "debootstrap+default"
    },
    "cluster_name": "cluster1.example.com",
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "admin_up": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "node1.example.com"
        ],
        "os": "debootstrap+default"
      },
      "instance2.example.com": {
        "tags": [],
        "admin_up": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "debootstrap+default"
      }
    },
    "version": 2,
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "198.51.100.1",
        "secondary_ip": "192.0.2.1",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "198.51.100.2",
        "secondary_ip": "192.0.2.2",
        "tags": ["test"],
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com": {
        "total_disk": 572184,
        "primary_ip": "198.51.100.3",
        "secondary_ip": "192.0.2.3",
        "tags": [],
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    }
  }

Input message, relocation. Since only the request entry in the input
message is changed, we show only this changed entry::

  "request": {
    "relocate_from": [
      "node3.example.com"
    ],
    "required_nodes": 1,
    "type": "relocate",
    "name": "instance2.example.com",
    "disk_space_total": 832
  },

Input message, node evacuation::

  "request": {
    "evac_nodes": [
      "node2"
    ],
    "type": "multi-evacuate"
  },

Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "info": "Allocation successful",
    "result": [
      "node2.example.com",
      "node1.example.com"
    ],
    "success": true
  }

Failed response message::

  {
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "result": [],
    "success": false
  }

Successful node evacuation message::

  {
    "info": "Request successful",
    "result": [
      [
        "instance1",
        "node3"
      ],
      [
        "instance2",
        "node1"
      ]
    ],
    "success": true
  }

Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 2 (already selected: node1.example.com)

Reference implementation
~~~~~~~~~~~~~~~~~~~~~~~~

Ganeti's default iallocator is "hail", which is part of the separate
ganeti-htools project. To see its source code, clone
``git://git.ganeti.org/htools.git``. Note that htools is implemented
in the Haskell programming language.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: