Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.7

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents a completely automatic node
evacuation, and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows customization of the cluster layout per
the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

There are two parts of the Ganeti operation that are impacted by the
auto-allocation: how the cluster knows which allocator algorithms are
available, and how the admin uses them when creating instances.

An allocation algorithm is just the filename of a program installed in
a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of the directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.
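
For example, a build that should consult a site-local directory before
the default one could be configured as follows (the paths here are
illustrative only)::

  $ ./configure --with-iallocator-search-path=/usr/local/lib/ganeti/iallocators,/usr/lib/ganeti/iallocators
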
Ganeti will then search for the allocator script in the configured
directories, using the first one whose filename matches the name given
by the user.

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which will cause the automatic assignment of nodes with the
passed iallocator. The selected node(s) will be shown as part of the
command output.
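
For example (``instance6`` is a hypothetical instance name; complete
transcripts are shown in the `Examples`_ section below)::

  # gnt-instance add -t plain -m 1g --os-size 1g -I hail -o debootstrap+default instance6
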
IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
will be the following:

#. Ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with an exit code different from zero, it is
   considered a general failure and the full output will be reported to
   the users; this can be the case when the allocator can't parse the
   input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)
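
The following minimal Python skeleton illustrates this calling
convention; it is a sketch only, with the actual placement logic left
out::

  #!/usr/bin/env python
  # Minimal iallocator skeleton: read the input message from the file
  # named on the command line, emit a JSON response on stdout.
  import json
  import sys

  def main():
      # Ganeti passes a single argument: the input message's filename
      with open(sys.argv[1]) as fh:
          msg = json.load(fh)
      request = msg["request"]
      # ... actual placement logic would go here ...
      response = {
          "success": False,
          "info": "no handler for request type %r" % request["type"],
          "result": [],
      }
      # Exit code zero means the response itself is valid; success or
      # failure of the allocation is signalled inside the response
      json.dump(response, sys.stdout)
      return 0

  if __name__ == "__main__":
      sys.exit(main())
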
Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary containing
all the required information to perform the operation. We explain the
contents of this dictionary in two parts: common information that every
type of operation requires, and operation-specific information.

Common information
++++++++++++++++++

All input dictionaries to the IAllocator must carry the following keys:

version
  the version of the protocol; this document specifies version 2

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

enabled_hypervisors
  the list of enabled hypervisors

ipolicy
  the cluster-wide instance policy (for information; the per-node group
  values take precedence and should be used instead)

request
  a dictionary containing the details of the request; the keys vary
  depending on the type of operation that's being requested, as
  explained in `Operation-specific input`_ below.

nodegroups
  a dictionary with the data for the cluster's node groups; it is keyed
  on the group UUID, and the values are a dictionary with the following
  keys:

  name
    the node group name
  alloc_policy
    the allocation policy of the node group (consult the semantics of
    this attribute in the :manpage:`gnt-group(8)` manpage)
  ipolicy
    the instance policy of the node group

instances
  a dictionary with the data for the currently existing instances on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition of:

  admin_state
    whether this instance is set to run (but not the actual status of
    the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains [*]_ :

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

  master_candidate
    a boolean flag denoting whether this node is a master candidate

  drained
    a boolean flag denoting whether this node is being drained

  offline
    a boolean flag denoting whether this node is offline

  i_pri_memory
    total memory required by primary instances

  i_pri_up_memory
    total memory required by running primary instances

  group
    the node group that this node belongs to

  No allocations should be made on nodes having either the ``drained``
  or ``offline`` flags set. More details about these node status flags
  are available in the manpage :manpage:`ganeti(7)`.

.. [*] Note that no run-time data is present for offline, drained or
   non-vm_capable nodes; this means the keys total_memory,
   reserved_memory, free_memory, total_disk, free_disk, total_cpus,
   i_pri_memory and i_pri_up_memory will be absent
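
As an illustration, an allocator could honour these flags when building
its candidate node list; a minimal sketch, with ``msg`` being the
decoded input message::

  def candidate_nodes(msg):
      """Return the names of nodes that may receive new allocations."""
      return [name for (name, info) in msg["nodes"].items()
              if not info.get("offline") and not info.get("drained")]
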
Operation-specific input
++++++++++++++++++++++++

All input dictionaries to the IAllocator carry, in the ``request``
dictionary, detailed information about the operation that's being
requested. The required keys vary depending on the type of operation, as
follows.

In all cases, it includes:

  type
    the request type; this can be either ``allocate``, ``relocate``,
    ``change-group`` or ``node-evacuate``. The ``allocate`` request is
    used when a new instance needs to be placed on the cluster. The
    ``relocate`` request is used when an existing instance needs to be
    moved within its node group.

    The ``multi-evacuate`` protocol was used to request that the script
    compute the optimal relocation solution for all secondary instances
    of the given nodes. It is now deprecated and only needs to be
    implemented if backwards compatibility with Ganeti 2.4 and lower is
    needed.

    The ``change-group`` request is used to relocate multiple instances
    across multiple node groups. ``node-evacuate`` evacuates instances
    off their node(s). These are described in a separate :ref:`design
    document <multi-reloc-detailed-design>`.

    The ``multi-allocate`` request is used to allocate multiple
    instances on the cluster. Apart from that, the request is very
    similar to the ``allocate`` one. For more details look at
    :doc:`Ganeti bulk create <design-bulk-create>`. A dispatch sketch
    on this key follows this list.
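
A script supporting several request types would typically branch on
this key. The following is a sketch only; the per-type handler
functions are hypothetical and would be supplied by the allocator
itself::

  def dispatch(msg, handlers):
      """Route an input message to a per-type handler.

      ``handlers`` maps request types (e.g. "allocate") to functions
      that take the full input message and return a response dict.
      """
      rtype = msg["request"]["type"]
      if rtype not in handlers:
          return {"success": False,
                  "info": "unsupported request type %r" % rtype,
                  "result": []}
      return handlers[rtype](msg)
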
For both allocate and relocate modes, the following extra keys are
needed in the ``request`` dictionary:

  name
    the name of the instance; if the request is a relocation, then this
    name will be found in the list of instances (see below), otherwise
    it is the FQDN of the new instance; type *string*

  required_nodes
    how many nodes should the algorithm return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti as then allocator scripts are
    less sensitive to changes to the disk templates; type *integer*

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it; type *integer* (see the sketch after this
    list)
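
As a rough illustration: in the DRBD allocation example under
`Examples`_ below, two disks of 1024 and 2048 MiB yield a
``disk_space_total`` of 3328 MiB, which is consistent with a fixed
per-disk replication overhead of 128 MiB. That constant is an
assumption made for this sketch, not something this document
specifies::

  ASSUMED_DRBD_META_MIB = 128  # assumed per-disk metadata overhead

  def drbd_disk_space_total(disks):
      # 1024 + 128 + 2048 + 128 == 3328, matching the example request
      return sum(d["size"] + ASSUMED_DRBD_META_MIB for d in disks)
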
.. pyassert::

   constants.DISK_ACCESS_SET == set([constants.DISK_RDONLY,
     constants.DISK_RDWR])

Allocation needs, in addition:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either :pyeval:`constants.DISK_RDONLY` or
      :pyeval:`constants.DISK_RDWR` denoting if the disk is read-only or
      writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  hypervisor
    the hypervisor of this instance

Relocation:

  relocate_from
    a list of nodes to move the instance away from; for DRBD-based
    instances, this will contain a single node, the current secondary
    of the instance, whereas for shared-storage instances, this will
    also contain a single node, the current primary of the instance;
    type *list of strings* (a relocation sketch follows this list)
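
A relocation handler must therefore pick ``required_nodes`` new nodes
while avoiding both the instance's current node(s) and the ones listed
in ``relocate_from``. A minimal sketch, reusing the hypothetical
``candidate_nodes`` helper from above and omitting any placement
scoring::

  def handle_relocate(msg):
      req = msg["request"]
      inst = msg["instances"][req["name"]]
      excluded = set(inst["nodes"]) | set(req["relocate_from"])
      candidates = [n for n in candidate_nodes(msg)
                    if n not in excluded]
      chosen = candidates[:req["required_nodes"]]  # sketch: no scoring
      if len(chosen) < req["required_nodes"]:
          return {"success": False,
                  "info": "not enough candidate nodes",
                  "result": []}
      return {"success": True,
              "info": "relocation computed",
              "result": chosen}
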
As for ``node-evacuate``, it needs the following request arguments:

  instances
    a list of instance names to evacuate; type *list of strings*

  evac_mode
    specifies which instances to evacuate; one of ``primary-only``,
    ``secondary-only`` or ``all``; type *string* (an example request
    follows this list)
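
A hypothetical ``node-evacuate`` request could therefore look like the
following sketch, constructed from the key descriptions above rather
than taken from a live cluster::

  {
    "version": 2,
    ...
    "request": {
      "type": "node-evacuate",
      "instances": [
        "instance1.example.com",
        "instance2.example.com"
      ],
      "evac_mode": "secondary-only"
    }
  }
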
``change-group`` needs the following request arguments:

  instances
    a list of instance names whose group to change; type
    *list of strings*

  target_groups
    must either be the empty list, or contain a list of group UUIDs that
    should be considered for relocating instances to; type
    *list of strings*

``multi-allocate`` needs the following request arguments:

  instances
    a list of request dicts

Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict having three keys:

success
  a boolean value denoting if the allocation was successful or not

info
  a string with information from the scripts; if the allocation fails,
  this will be shown to the user

result
  the output of the algorithm; even if the algorithm failed
  (i.e. success is false), this must be returned as an empty list

  for allocate/relocate, this is the list of node(s) for the instance;
  note that the length of this list must equal the ``required_nodes``
  entry in the input message, otherwise Ganeti will consider the result
  as failed

  for the ``node-evacuate`` and ``change-group`` modes, this is a
  dictionary containing, among other information, a list of lists of
  serialized opcodes; see the :ref:`design document
  <multi-reloc-result>` for a detailed description

  for the ``multi-allocate`` mode this is a tuple of two lists: the
  first element is the list of successful allocations, each entry
  containing the instance name as its first element and the node
  placement as its second; the second element is the list of instances
  whose allocation failed

.. note:: The current Ganeti version accepts either ``result`` or
   ``nodes`` as a backwards-compatibility measure (older versions only
   supported ``nodes``)
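
Putting this together, a small helper for emitting well-formed
responses could look like the following sketch (using the ``json`` and
``sys`` modules as in the skeleton above)::

  def emit_response(success, info, result):
      # "result" must always be present, and must be empty on failure
      if not success:
          result = []
      json.dump({"success": success, "info": info, "result": result},
                sys.stdout)
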
Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation (common elements are listed this
time, but not included in further examples below)::

  {
    "version": 2,
    "cluster_name": "cluster1.example.com",
    "cluster_tags": [],
    "enabled_hypervisors": [
      "xen-pvm"
    ],
    "nodegroups": {
      "f4e06e0d-528a-4963-a5ad-10f3e114232d": {
        "name": "default",
        "alloc_policy": "preferred"
      }
    },
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "node1.example.com"
        ],
        "os": "debootstrap+default"
      },
      "instance2.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "debootstrap+default"
      }
    },
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "198.51.100.1",
        "secondary_ip": "192.0.2.1",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "198.51.100.2",
        "secondary_ip": "192.0.2.2",
        "tags": ["test"],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com": {
        "total_disk": 572184,
        "primary_ip": "198.51.100.3",
        "secondary_ip": "192.0.2.3",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    },
    "request": {
      "type": "allocate",
      "name": "instance3.example.com",
      "required_nodes": 2,
      "disk_space_total": 3328,
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "os": "debootstrap+default",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "hypervisor": "xen-pvm"
    }
  }

Input message, relocation::

  {
    "version": 2,
    ...
    "request": {
      "type": "relocate",
      "name": "instance2.example.com",
      "required_nodes": 1,
      "disk_space_total": 832,
      "relocate_from": [
        "node3.example.com"
      ]
    }
  }

Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "success": true,
    "info": "Allocation successful",
    "result": [
      "node2.example.com",
      "node1.example.com"
    ]
  }

Failed response message::

  {
    "success": false,
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "result": []
  }

Successful node evacuation message::

  {
    "success": true,
    "info": "Request successful",
    "result": [
      [
        "instance1",
        "node3"
      ],
      [
        "instance2",
        "node1"
      ]
    ]
  }

Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 2 (already selected: node1.example.com)

Reference implementation
~~~~~~~~~~~~~~~~~~~~~~~~

Ganeti's default iallocator is "hail", which is available when the
"htools" components have been enabled at build time (see
:doc:`install-quick` for more details).
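
Since "hail" follows the protocol described above, it can also be run
by hand against a saved input message when testing; the path below
depends on the configured search path, and the input file is
hypothetical::

  $ /usr/lib/ganeti/iallocators/hail /tmp/allocate-input.json
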
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: