Ganeti automatic instance allocation
====================================

Documents Ganeti version 2.7

.. contents::

Introduction
------------

Currently in Ganeti the admin has to specify the exact locations for
an instance's node(s). This prevents a completely automatic node
evacuation, and is in general a nuisance.

The *iallocator* framework will enable automatic placement via
external scripts, which allows customization of the cluster layout per
the site's requirements.

User-visible changes
~~~~~~~~~~~~~~~~~~~~

There are two parts of the Ganeti operation that are impacted by the
auto-allocation: how the cluster knows what the allocator algorithms
are and how the admin uses these in creating instances.

An allocation algorithm is just the filename of a program installed in
a defined list of directories.

Cluster configuration
~~~~~~~~~~~~~~~~~~~~~

At configure time, the list of the directories can be selected via the
``--with-iallocator-search-path=LIST`` option, where *LIST* is a
comma-separated list of directories. If not given, this defaults to
``$libdir/ganeti/iallocators``, i.e. for an installation under
``/usr``, this will be ``/usr/lib/ganeti/iallocators``.

Ganeti will then search for allocator scripts in the configured list,
using the first one whose filename matches the one given by the user.
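
For illustration, the lookup could be sketched as below; ``find_allocator``
and the single-directory search path are assumptions made for this example,
not Ganeti's actual code:

```python
import os

# Default search path from the example above; the real list is fixed at
# configure time via --with-iallocator-search-path.
SEARCH_PATH = ["/usr/lib/ganeti/iallocators"]


def find_allocator(name, search_path=SEARCH_PATH):
    """Return the first matching allocator script, or None."""
    for directory in search_path:
        candidate = os.path.join(directory, name)
        # An allocator must be an executable file.
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```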

Command line interface changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The node selection options in instance add and instance replace disks
can be replaced by the new ``--iallocator=NAME`` option (shortened to
``-I``), which will cause the automatic assignment of nodes with the
passed iallocator. The selected node(s) will be shown as part of the
command output.

IAllocator API
--------------

The protocol for communication between Ganeti and an allocator script
will be the following:

#. ganeti launches the program with a single argument, a filename that
   contains a JSON-encoded structure (the input message)

#. if the script finishes with exit code different from zero, it is
   considered a general failure and the full output will be reported to
   the users; this can be the case when the allocator can't parse the
   input message

#. if the allocator finishes with exit code zero, it is expected to
   output (on its stdout) a JSON-encoded structure (the response)
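
A minimal script honoring these three steps might look like the Python
sketch below; the placement logic is deliberately naive (it ignores node
status flags and resources) and ``handle`` is a name invented for the
example:

```python
#!/usr/bin/env python3
"""Toy iallocator script: reads the input message, writes a response.

Illustrative sketch only; a real allocator implements an actual
placement algorithm.
"""

import json
import sys


def handle(msg):
    """Compute a (toy) response dict for an ``allocate`` request."""
    request = msg["request"]
    if request["type"] != "allocate":
        raise ValueError("unsupported request type: %s" % request["type"])
    # Naive placement: the first node names in sorted order.
    nodes = sorted(msg["nodes"])[:request["required_nodes"]]
    return {
        "success": len(nodes) == request["required_nodes"],
        "info": "toy allocator",
        "result": nodes,
    }


def main():
    # Step 1: the single argument is the file with the JSON input message.
    with open(sys.argv[1]) as fh:
        msg = json.load(fh)
    try:
        response = handle(msg)
    except ValueError as err:
        # Step 2: a non-zero exit code signals a general failure; the
        # output is reported to the user.
        sys.exit(str(err))
    # Step 3: exit code zero, JSON-encoded response on stdout.
    json.dump(response, sys.stdout)


if __name__ == "__main__":
    main()
```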

Input message
~~~~~~~~~~~~~

The input message will be the JSON encoding of a dictionary containing
all the required information to perform the operation. We explain the
contents of this dictionary in two parts: common information that every
type of operation requires, and operation-specific information.

Common information
++++++++++++++++++

All input dictionaries to the IAllocator must carry the following keys:

version
  the version of the protocol; this document
  specifies version 2

cluster_name
  the cluster name

cluster_tags
  the list of cluster tags

enabled_hypervisors
  the list of enabled hypervisors

ipolicy
  the cluster-wide instance policy (for information; the per-node group
  values take precedence and should be used instead)

request
  a dictionary containing the details of the request; the keys vary
  depending on the type of operation that's being requested, as
  explained in `Operation-specific input`_ below.

nodegroups
  a dictionary with the data for the cluster's node groups; it is keyed
  on the group UUID, and the values are a dictionary with the following
  keys:

  name
    the node group name
  alloc_policy
    the allocation policy of the node group (consult the semantics of
    this attribute in the :manpage:`gnt-group(8)` manpage)
  ipolicy
    the instance policy of the node group
  tags
    the list of node group tags

instances
  a dictionary with the data for the instances currently existing on
  the cluster, indexed by instance name; the contents are similar to
  the instance definitions for the allocate mode, with the addition of:

  admin_state
    whether this instance is set to run (but not the actual status of
    the instance)

  nodes
    list of nodes on which this instance is placed; the primary node
    of the instance is always the first one

nodes
  dictionary with the data for the nodes in the cluster, indexed by
  the node name; the dict contains [*]_ :

  total_disk
    the total disk size of this node (mebibytes)

  free_disk
    the free disk space on the node

  total_memory
    the total memory size

  free_memory
    free memory on the node; note that currently this does not take
    into account the instances which are down on the node

  total_cpus
    the physical number of CPUs present on the machine; depending on
    the hypervisor, this might or might not be equal to how many CPUs
    the node operating system sees

  primary_ip
    the primary IP address of the node

  secondary_ip
    the secondary IP address of the node (the one used for the DRBD
    replication); note that this can be the same as the primary one

  tags
    list with the tags of the node

  master_candidate:
    a boolean flag denoting whether this node is a master candidate

  drained:
    a boolean flag denoting whether this node is being drained

  offline:
    a boolean flag denoting whether this node is offline

  i_pri_memory:
    total memory required by primary instances

  i_pri_up_memory:
    total memory required by running primary instances

  group:
    the node group that this node belongs to

  No allocations should be made on nodes having either the ``drained``
  or ``offline`` flags set. More details about these node status
  flags are available in the manpage :manpage:`ganeti(7)`.

.. [*] Note that no run-time data is present for offline, drained or
   non-vm_capable nodes; this means the keys total_memory,
   reserved_memory, free_memory, total_disk, free_disk, total_cpus,
   i_pri_memory and i_pri_up_memory will be absent

Operation-specific input
++++++++++++++++++++++++

All input dictionaries to the IAllocator carry, in the ``request``
dictionary, detailed information about the operation that's being
requested. The required keys vary depending on the type of operation, as
follows.

In all cases, it includes:

  type
    the request type; this can be either ``allocate``, ``relocate``,
    ``change-group`` or ``node-evacuate``. The
    ``allocate`` request is used when a new instance needs to be placed
    on the cluster. The ``relocate`` request is used when an existing
    instance needs to be moved within its node group.

    The ``multi-evacuate`` protocol was used to request that the script
    compute the optimal relocation solution for all secondary instances
    of the given nodes. It is now deprecated and need only be
    implemented if backwards compatibility with Ganeti 2.4 and lower is
    needed.

    The ``change-group`` request is used to relocate multiple instances
    across multiple node groups. ``node-evacuate`` evacuates instances
    off their node(s). These are described in a separate :ref:`design
    document <multi-reloc-detailed-design>`.

    The ``multi-allocate`` request is used to allocate multiple
    instances on the cluster. Apart from that, the request is very
    similar to the ``allocate`` one. For more details look at
    :doc:`Ganeti bulk create <design-bulk-create>`.

For both allocate and relocate mode, the following extra keys are needed
in the ``request`` dictionary:

  name
    the name of the instance; if the request is a relocation, then this
    name will be found in the list of instances (see below), otherwise
    it is the FQDN of the new instance; type *string*

  required_nodes
    how many nodes should the algorithm return; while this information
    can be deduced from the instance's disk template, it's better if
    this computation is left to Ganeti as then allocator scripts are
    less sensitive to changes to the disk templates; type *integer*

  disk_space_total
    the total disk space that will be used by this instance on the
    (new) nodes; again, this information can be computed from the list
    of instance disks and its template type, but Ganeti is better
    suited to compute it; type *integer*

.. pyassert::

   constants.DISK_ACCESS_SET == set([constants.DISK_RDONLY,
     constants.DISK_RDWR])
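
To illustrate the deduction that Ganeti performs for ``required_nodes``,
a simplistic mapping might look as follows; this sketch assumes only a
DRBD-style mirrored template needs a secondary node on allocation, and
is not the full rule set (relocation, for instance, asks for one node
even for DRBD):

```python
def required_nodes(disk_template):
    """Node count for an *allocate* request, per disk template.

    Assumption for this sketch: only "drbd" needs a secondary node;
    real Ganeti knows more templates and handles relocation separately.
    """
    return 2 if disk_template == "drbd" else 1
```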

Allocation needs, in addition:

  disks
    list of dictionaries holding the disk definitions for this
    instance (in the order they are exported to the hypervisor):

    mode
      either :pyeval:`constants.DISK_RDONLY` or
      :pyeval:`constants.DISK_RDWR` denoting if the disk is read-only or
      writable

    size
      the size of this disk in mebibytes

  nics
    a list of dictionaries holding the network interfaces for this
    instance, containing:

    ip
      the IP address that Ganeti knows for this instance, or null

    mac
      the MAC address for this interface

    bridge
      the bridge to which this interface will be connected

  vcpus
    the number of VCPUs for the instance

  disk_template
    the disk template for the instance

  memory
    the memory size for the instance

  os
    the OS type for the instance

  tags
    the list of the instance's tags

  hypervisor
    the hypervisor of this instance

Relocation:

  relocate_from
    a list of nodes to move the instance away from; for DRBD-based
    instances, this will contain a single node, the current secondary
    of the instance, whereas for shared-storage instances, this will
    also contain a single node, the current primary of the instance;
    type *list of strings*

As for ``node-evacuate``, it needs the following request arguments:

  instances
    a list of instance names to evacuate; type *list of strings*

  evac_mode
    specify which instances to evacuate; one of ``primary-only``,
    ``secondary-only``, ``all``; type *string*

``change-group`` needs the following request arguments:

  instances
    a list of instance names whose group to change; type
    *list of strings*

  target_groups
    must either be the empty list, or contain a list of group UUIDs that
    should be considered for relocating instances to; type
    *list of strings*

``multi-allocate`` needs the following request arguments:

  instances
    a list of request dicts

Response message
~~~~~~~~~~~~~~~~

The response message is much simpler than the input one. It is also a
dict having three keys:

success
  a boolean value denoting if the allocation was successful or not

info
  a string with information from the scripts; if the allocation fails,
  this will be shown to the user

result
  the output of the algorithm; even if the algorithm failed
  (i.e. success is false), this must be returned as an empty list

  for allocate/relocate, this is the list of node(s) for the instance;
  note that the length of this list must equal the ``required_nodes``
  entry in the input message, otherwise Ganeti will consider the result
  as failed

  for the ``node-evacuate`` and ``change-group`` modes, this is a
  dictionary containing, among other information, a list of lists of
  serialized opcodes; see the :ref:`design document
  <multi-reloc-result>` for a detailed description

  for the ``multi-allocate`` mode this is a tuple of two lists: the
  first is a list of successful allocations, each entry holding the
  instance name as its first element and the node placement as its
  second; the second is the list of instances whose allocation failed

.. note:: Current Ganeti versions accept either ``result`` or ``nodes``
   as a backwards-compatibility measure (older versions only supported
   ``nodes``)
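
The caller's side of the exchange (launch, exit-code check, response
validation) can be sketched as follows; ``run_allocator`` and
``valid_allocate_result`` are hypothetical helpers for illustration,
not Ganeti's implementation:

```python
import json
import subprocess
import tempfile

# The three keys every response must carry, per the section above.
REQUIRED_KEYS = ("success", "info", "result")


def run_allocator(script, message):
    """Launch an allocator script on a message, return its response."""
    with tempfile.NamedTemporaryFile("w", suffix=".json") as fh:
        json.dump(message, fh)
        fh.flush()
        # The script receives a single argument: the input message file.
        proc = subprocess.run([script, fh.name],
                              capture_output=True, text=True)
    if proc.returncode != 0:
        # Non-zero exit code: general failure, full output to the user.
        raise RuntimeError("allocator failed: " +
                           proc.stdout + proc.stderr)
    response = json.loads(proc.stdout)
    missing = [key for key in REQUIRED_KEYS if key not in response]
    if missing:
        raise RuntimeError("malformed response, missing keys: %s" % missing)
    return response


def valid_allocate_result(response, required_nodes):
    """An allocate/relocate result must list exactly required_nodes."""
    return (bool(response["success"])
            and len(response["result"]) == required_nodes)
```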

Examples
--------

Input messages to scripts
~~~~~~~~~~~~~~~~~~~~~~~~~

Input message, new instance allocation (common elements are listed this
time, but not included in further examples below)::

  {
    "version": 2,
    "cluster_name": "cluster1.example.com",
    "cluster_tags": [],
    "enabled_hypervisors": [
      "xen-pvm"
    ],
    "nodegroups": {
      "f4e06e0d-528a-4963-a5ad-10f3e114232d": {
        "name": "default",
        "alloc_policy": "preferred",
        "ipolicy": {
          "disk-templates": ["drbd", "plain"],
          "minmax": [
            {
              "max": {
                "cpu-count": 2,
                "disk-count": 8,
                "disk-size": 2048,
                "memory-size": 12800,
                "nic-count": 8,
                "spindle-use": 8
              },
              "min": {
                "cpu-count": 1,
                "disk-count": 1,
                "disk-size": 1024,
                "memory-size": 128,
                "nic-count": 1,
                "spindle-use": 1
              }
            }
          ],
          "spindle-ratio": 32.0,
          "std": {
            "cpu-count": 1,
            "disk-count": 1,
            "disk-size": 1024,
            "memory-size": 128,
            "nic-count": 1,
            "spindle-use": 1
          },
          "vcpu-ratio": 4.0
        },
        "tags": ["ng-tag-1", "ng-tag-2"]
      }
    },
    "instances": {
      "instance1.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 64
          },
          {
            "mode": "w",
            "size": 512
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:00:60:bf",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "plain",
        "memory": 128,
        "nodes": [
          "node1.example.com"
        ],
        "os": "debootstrap+default"
      },
      "instance2.example.com": {
        "tags": [],
        "should_run": false,
        "disks": [
          {
            "mode": "w",
            "size": 512
          },
          {
            "mode": "w",
            "size": 256
          }
        ],
        "nics": [
          {
            "ip": null,
            "mac": "aa:00:00:55:f8:38",
            "bridge": "xen-br0"
          }
        ],
        "vcpus": 1,
        "disk_template": "drbd",
        "memory": 512,
        "nodes": [
          "node2.example.com",
          "node3.example.com"
        ],
        "os": "debootstrap+default"
      }
    },
    "nodes": {
      "node1.example.com": {
        "total_disk": 858276,
        "primary_ip": "198.51.100.1",
        "secondary_ip": "192.0.2.1",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 856740,
        "total_memory": 4095
      },
      "node2.example.com": {
        "total_disk": 858240,
        "primary_ip": "198.51.100.2",
        "secondary_ip": "192.0.2.2",
        "tags": ["test"],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 848320,
        "total_memory": 4095
      },
      "node3.example.com": {
        "total_disk": 572184,
        "primary_ip": "198.51.100.3",
        "secondary_ip": "192.0.2.3",
        "tags": [],
        "group": "f4e06e0d-528a-4963-a5ad-10f3e114232d",
        "free_memory": 3505,
        "free_disk": 570648,
        "total_memory": 4095
      }
    },
    "request": {
      "type": "allocate",
      "name": "instance3.example.com",
      "required_nodes": 2,
      "disk_space_total": 3328,
      "disks": [
        {
          "mode": "w",
          "size": 1024
        },
        {
          "mode": "w",
          "size": 2048
        }
      ],
      "nics": [
        {
          "ip": null,
          "mac": "00:11:22:33:44:55",
          "bridge": null
        }
      ],
      "vcpus": 1,
      "disk_template": "drbd",
      "memory": 2048,
      "os": "debootstrap+default",
      "tags": [
        "type:test",
        "owner:foo"
      ],
      "hypervisor": "xen-pvm"
    }
  }

Input message, relocation::

  {
    "version": 2,
    ...
    "request": {
      "type": "relocate",
      "name": "instance2.example.com",
      "required_nodes": 1,
      "disk_space_total": 832,
      "relocate_from": [
        "node3.example.com"
      ]
    }
  }

Response messages
~~~~~~~~~~~~~~~~~

Successful response message::

  {
    "success": true,
    "info": "Allocation successful",
    "result": [
      "node2.example.com",
      "node1.example.com"
    ]
  }

Failed response message::

  {
    "success": false,
    "info": "Can't find a suitable node for position 2 (already selected: node2.example.com)",
    "result": []
  }

Successful node evacuation message::

  {
    "success": true,
    "info": "Request successful",
    "result": [
      [
        "instance1",
        "node3"
      ],
      [
        "instance2",
        "node1"
      ]
    ]
  }

Command line messages
~~~~~~~~~~~~~~~~~~~~~

::

  # gnt-instance add -t plain -m 2g --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance3
  Selected nodes for the instance: node1.example.com
  * creating instance disks...
  [...]

  # gnt-instance add -t plain -m 3400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance4
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 1 (already selected: )

  # gnt-instance add -t drbd -m 1400m --os-size 1g --swap-size 512m --iallocator hail -o debootstrap+default instance5
  Failure: prerequisites not met for this operation:
  Can't compute nodes using iallocator 'hail': Can't find a suitable node for position 2 (already selected: node1.example.com)

Reference implementation
~~~~~~~~~~~~~~~~~~~~~~~~

Ganeti's default iallocator is "hail", which is available when the
"htools" components have been enabled at build time (see
:doc:`install-quick` for more details).

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: