Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.4

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents a completely automatic node
evacuation, and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows customization of the cluster layout per
the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

There are two parts of the Ganeti operation that are impacted by the
auto-allocation: how the cluster knows what the allocator algorithms
are, and how the admin uses these in creating instances.

An allocation algorithm is just the filename of a program installed in
a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of the directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.

Ganeti will then search for the allocator script in the configured
list, using the first one whose filename matches the one given by the
user.

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which will cause the auto-assignment of nodes with the
passed iallocator. The selected node(s) will be shown as part of the
command output.

IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
will be the following:

#. Ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with an exit code different from zero, it is
   considered a general failure and the full output will be reported to
   the users; this can be the case when the allocator can't parse the
   input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)

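The three steps above can be sketched as a small script. This is a
minimal illustration of the protocol only: the function names are our
own and no real placement logic is included.

```python
#!/usr/bin/env python
"""Minimal iallocator protocol skeleton (illustrative only)."""

import json
import sys


def build_response(request):
    """Build a response dict for an input ``request`` dictionary.

    No real placement logic is implemented; this only shows the shape
    of the exchange.
    """
    if request.get("type") not in ("allocate", "relocate"):
        return {"success": False,
                "info": "request type %r not handled" % request.get("type"),
                "result": []}
    # A real allocator would select request["required_nodes"] nodes here.
    return {"success": False,
            "info": "no placement algorithm implemented",
            "result": []}


def main(path):
    # Step 1: Ganeti passes a single argument, the input message file.
    with open(path) as infile:
        msg = json.load(infile)
    # Step 3: print the JSON-encoded response on stdout and exit with 0.
    print(json.dumps(build_response(msg["request"])))
    return 0


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1]))
```

A non-zero exit code (step 2) would be reported to the user as a
general failure, so a real script should exit non-zero on parse errors.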
Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary containing
all the required information to perform the operation. We explain the
contents of this dictionary in two parts: common information that every
type of operation requires, and operation-specific information.

Common information
++++++++++++++++++

All input dictionaries to the IAllocator must carry the following keys:

version
  the version of the protocol; this document specifies version 2

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

enabled_hypervisors
  the list of enabled hypervisors

request
  a dictionary containing the details of the request; the keys vary
  depending on the type of operation that's being requested, as
  explained in `Operation-specific input`_ below.

nodegroups
  a dictionary with the data for the cluster's node groups; it is keyed
  on the group UUID, and the values are a dictionary with the following
  keys:

  name
    the node group name
  alloc_policy
    the allocation policy of the node group (consult the semantics of
    this attribute in the :manpage:`gnt-group(8)` manpage)

instances
  a dictionary with the data for the currently existing instances on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition of:

  admin_up
    whether this instance is set to run (but not the actual status of
    the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains [*]_ :

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

  master_candidate:
    a boolean flag denoting whether this node is a master candidate

  drained:
    a boolean flag denoting whether this node is being drained

  offline:
    a boolean flag denoting whether this node is offline

  i_pri_memory:
    total memory required by primary instances

  i_pri_up_memory:
    total memory required by running primary instances

  group:
    the node group that this node belongs to

  No allocations should be made on nodes having either the ``drained``
  or ``offline`` flags set. More details about these node status flags
  are available in the manpage :manpage:`ganeti(7)`.

.. [*] Note that no run-time data is present for offline, drained or
   non-vm_capable nodes; this means the keys total_memory,
   reserved_memory, free_memory, total_disk, free_disk, total_cpus,
   i_pri_memory and i_pri_up_memory will be absent

Operation-specific input
++++++++++++++++++++++++

All input dictionaries to the IAllocator carry, in the ``request``
dictionary, detailed information about the operation that's being
requested. The required keys vary depending on the type of operation, as
follows.

In all cases, it includes:

  type
    the request type; this can be ``allocate``, ``relocate``,
    ``change-group``, ``node-evacuate`` or ``multi-evacuate``. The
    ``allocate`` request is used when a new instance needs to be placed
    on the cluster. The ``relocate`` request is used when an existing
    instance needs to be moved within its node group.

    The ``multi-evacuate`` protocol was used to request that the script
    compute the optimal relocate solution for all secondary instances
    of the given nodes. It is now deprecated and should no longer be
    used.

    The ``change-group`` request is used to relocate multiple instances
    across multiple node groups. ``node-evacuate`` evacuates instances
    off their node(s). These are described in a separate :ref:`design
    document <multi-reloc-detailed-design>`.

For both allocate and relocate mode, the following extra keys are needed
in the ``request`` dictionary:

  name
    the name of the instance; if the request is a relocation, then this
    name will be found in the list of instances (see below), otherwise
    it is the FQDN of the new instance; type *string*

  required_nodes
    how many nodes the algorithm should return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti, as then allocator scripts are
    less sensitive to changes to the disk templates; type *integer*

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it; type *integer*

.. pyassert::

   constants.DISK_ACCESS_SET == set([constants.DISK_RDONLY,
     constants.DISK_RDWR])

Allocation needs, in addition:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either :pyeval:`constants.DISK_RDONLY` or
      :pyeval:`constants.DISK_RDWR` denoting if the disk is read-only or
      writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  hypervisor
    the hypervisor of this instance

Relocation:

  relocate_from
     a list of nodes to move the instance away from (note that with
     Ganeti 2.0, this list will always contain a single node, the
     current secondary of the instance); type *list of strings*

As for ``node-evacuate``, it needs the following request arguments:

  instances
    a list of instance names to evacuate; type *list of strings*

  evac_mode
    specify which instances to evacuate; one of ``primary-only``,
    ``secondary-only``, ``all``; type *string*

``change-group`` needs the following request arguments:

  instances
    a list of instance names whose group to change; type
    *list of strings*

  target_groups
    must either be the empty list, or contain a list of group UUIDs that
    should be considered for relocating instances to; type
    *list of strings*

Finally, in the case of multi-evacuate, there is a single request
argument (in addition to ``type``):

  evac_nodes
    the names of the nodes to be evacuated; type *list of strings*

Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict having three keys:

success
  a boolean value denoting if the allocation was successful or not

info
  a string with information from the scripts; if the allocation fails,
  this will be shown to the user

result
  the output of the algorithm; even if the algorithm failed
  (i.e. success is false), this must be returned as an empty list

  for allocate/relocate, this is the list of node(s) for the instance;
  note that the length of this list must equal the ``required_nodes``
  entry in the input message, otherwise Ganeti will consider the result
  as failed

  for multi-relocate mode, this is a list of lists of serialized
  opcodes. See the :ref:`design document <multi-reloc-result>` for a
  detailed description.

  for multi-evacuation mode, this is a list of lists; each element of
  the list is a list of instance name and the new secondary node

.. note:: Current Ganeti versions accept either ``result`` or ``nodes``
   as a backwards-compatibility measure (older versions only supported
   ``nodes``)

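As a sketch of these constraints, a caller-side check of an
allocate/relocate response could look as follows. The helper name is
ours, not Ganeti's; the actual validation lives inside Ganeti itself.

```python
def check_alloc_response(resp, required_nodes):
    """Validate an allocate/relocate response dict (illustrative only)."""
    # All three keys must always be present.
    for key in ("success", "info", "result"):
        if key not in resp:
            raise ValueError("missing key: %s" % key)
    if not isinstance(resp["result"], list):
        raise ValueError("result must be a list")
    if not resp["success"]:
        # On failure, "info" is shown to the user and "result" must be
        # an empty list.
        return False
    if len(resp["result"]) != required_nodes:
        # A wrong node count makes Ganeti consider the result as failed.
        raise ValueError("expected %d node(s), got %d"
                         % (required_nodes, len(resp["result"])))
    return True
```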
Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation (common elements are listed this
time, but not included in further examples below)::

  {
    "version": 2,
    "cluster_name": "cluster1.example.com",
    "cluster_tags": [],
    "enabled_hypervisors": [
      "xen-pvm"
    ],
    "nodegroups": {
      "f4e06e0d-528a-4963-a5ad-10f3e114232d": {
        "name": "default",
        "alloc_policy": "preferred"
      }
    },
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "node1.example.com"
        ],
        "os": "debootstrap+default"
      },
      "instance2.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "debootstrap+default"
      }
    },
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "198.51.100.1",
        "secondary_ip": "192.0.2.1",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "198.51.100.2",
        "secondary_ip": "192.0.2.2",
        "tags": ["test"],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com": {
        "total_disk": 572184,
        "primary_ip": "198.51.100.3",
        "secondary_ip": "192.0.2.3",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    },
    "request": {
      "type": "allocate",
      "name": "instance3.example.com",
      "required_nodes": 2,
      "disk_space_total": 3328,
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "os": "debootstrap+default",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "hypervisor": "xen-pvm"
    }
  }

Input message, relocation::

  {
    "version": 2,
    ...
    "request": {
      "type": "relocate",
      "name": "instance2.example.com",
      "required_nodes": 1,
      "disk_space_total": 832,
      "relocate_from": [
        "node3.example.com"
      ]
    }
  }

Input message, node evacuation::

  {
    "version": 2,
    ...
    "request": {
      "type": "multi-evacuate",
      "evac_nodes": [
        "node2"
      ]
    }
  }

Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "success": true,
    "info": "Allocation successful",
    "result": [
      "node2.example.com",
      "node1.example.com"
    ]
  }

Failed response message::

  {
    "success": false,
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "result": []
  }

Successful node evacuation message::

  {
    "success": true,
    "info": "Request successful",
    "result": [
      [
        "instance1",
        "node3"
      ],
      [
        "instance2",
        "node1"
      ]
    ]
  }

Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 2 (already selected: node1.example.com)

Reference implementation
~~~~~~~~~~~~~~~~~~~~~~~~

Ganeti's default iallocator is "hail", which is available when the
"htools" components have been enabled at build time (see
:doc:`install-quick` for more details).

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: