=================
Ganeti 2.0 design
=================

This document describes the major changes in Ganeti 2.0 compared to
the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture,
paving the way for additional features in future 2.x versions.

.. contents:: :depth: 3

Objective
=========

Ganeti 1.2 has many scalability issues and restrictions due to its
roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and
then the restrictions.

Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of
cluster administration and imposes a very rigid model. It has the
following main scalability issues:

- only one operation at a time on the cluster [#]_
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical
design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
       and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like start instance
while an OS installation is running.

Scalability problems
--------------------

Ganeti 1.2 has a single global lock, which is used for all cluster
operations.  This has been painful at various times, for example:

- It is impossible for two people to efficiently interact with a cluster
  (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for
  example failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance
count, it's a lot more likely that operations which one could conceive
should run in parallel (for example because they happen on different
nodes) are actually stalling each other while waiting for the global
lock, without a real reason for that to happen.

One of the main causes of this global lock (beside the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so-called *cmd* lock and, when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
instances per node, which is no longer true with today's hardware.

Artificial restrictions
-----------------------

Ganeti 1.2 (and previous versions) have a fixed two-disk, one-NIC per
instance model. This is a purely artificial restriction, but it
touches so many areas (configuration, import/export, command line)
that it's better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========

In order to solve the scalability problems, a rewrite of the core
design of Ganeti is required. While the cluster operations themselves
won't change (e.g. start instance will do the same things), the way
these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

.. image:: arch-2.0.png

This differs from the 1.2 architecture by the addition of the master
daemon, which will be the only entity to talk to the node daemons.


Detailed design
===============

The changes for 2.0 can be split into roughly three areas:

- core changes that affect the design of the software
- features (or restriction removals) which do not have a wide
  impact on the design
- user-level and API-level changes which translate into differences for
  the operation of the cluster

Core changes
------------

The main changes will be switching from a per-process model to a
daemon-based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Besides these major changes, another 'core' change, though not as
visible to the users, will be changing the model of object attribute
storage, and separating that into namespaces (such that a Xen PVM
instance will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(S) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.

Master daemon
~~~~~~~~~~~~~

In Ganeti 2.0, we will have the following *entities*:

- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so-called
  *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access the nodes via SSH (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case in which a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

The choice of a UNIX socket was made in order to get rid of the need for
authentication and authorisation inside Ganeti; for 2.0, the
permissions on the Unix socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
still implemented internally with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.

The job-related functions will be:

- submit job
- query job (which could also be categorized in the query-functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an
  exception, the first element being the exception type and the second
  one the actual exception arguments; this will allow a simple method of
  passing Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between a hung master daemon and a job that simply has not
  changed

Users of the API that don't use the provided python library should
take care of the above two cases.
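
As an illustration of this wire format, a minimal client sketch could
look like the following (the socket path and helper name are
assumptions for the example, not the actual Ganeti client library; the
``nochange`` retry logic described above is omitted)::

  import json
  import socket

  ETX = chr(3)  # ASCII 0x03, used as the message delimiter
  # The socket path below is an assumption for illustration only.
  SOCKET_PATH = "/var/run/ganeti/master.sock"

  def call_luxi(method, args):
    """Send one LUXI request and return the decoded result."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(SOCKET_PATH)
    try:
      # Requests are a JSON dict with "method" and "args", ETX-terminated
      message = json.dumps({"method": method, "args": args}) + ETX
      sock.sendall(message.encode("utf-8"))
      buf = ""
      while ETX not in buf:
        data = sock.recv(4096).decode("utf-8")
        if not data:
          raise RuntimeError("Connection closed before message delimiter")
        buf += data
      response = json.loads(buf.split(ETX)[0])
    finally:
      sock.close()
    if not response["success"]:
      # A two-element list in "result" would encode a Ganeti exception
      raise RuntimeError(response["result"])
    return response["result"]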


Master daemon implementation
++++++++++++++++++++++++++++

The daemon will be based around a main I/O thread that will wait for
new requests from the clients, and that does the setup/shutdown of the
other thread pools.

There will be two other classes of threads in the daemon (sketched
below):

- job processing threads, part of a thread pool, and which are
  long-lived, started at daemon startup and terminated only at shutdown
  time
- client I/O threads, which are the ones that talk the local protocol
  (LUXI) to the clients, and are short-lived
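
A rough sketch of this thread layout (class and helper names here are
assumptions, not the actual masterd implementation) could be::

  import threading
  try:
    import Queue as queue          # Python 2, which Ganeti 2.0 targets
  except ImportError:
    import queue                   # Python 3

  job_queue = queue.Queue()        # jobs submitted by the client I/O threads

  def job_worker():
    """Long-lived worker thread: picks up queued jobs and executes them."""
    while True:
      job = job_queue.get()
      if job is None:              # shutdown marker
        break
      job.Run()                    # assumed job interface

  def handle_client(conn):
    """Short-lived client I/O thread: speaks LUXI and enqueues jobs."""
    # ... decode the LUXI request from conn, build a job object, then
    # job_queue.put(job) and send back the job ID ...
    conn.close()

  # The main I/O thread starts the worker pool once at daemon startup:
  workers = [threading.Thread(target=job_worker) for _ in range(4)]
  for thread in workers:
    thread.start()
  # ... and then loops accepting LUXI connections, spawning a short-lived
  # threading.Thread(target=handle_client, args=(conn,)) per client.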

Master startup/failover
+++++++++++++++++++++++

In Ganeti 1.x there is no protection against failing over the master
to a node with stale configuration. In effect, the responsibility for
correct failovers falls on the admin. This is true both for the new
master and for when an old, offline master starts up.

Since in 2.x we are extending the cluster state to cover the job queue
and have a daemon that will execute the job queue by itself, we want
to have more resilience for the master role.

The following algorithm will happen whenever a node is ready to
transition to the master role, either at startup time or at node
failover:

#. read the configuration file and parse the node list
   contained within

#. query all the nodes and make sure we obtain an agreement via
   a quorum of at least half plus one nodes for the following:

    - we have the latest configuration and job list (as
      determined by the serial number on the configuration and
      highest job ID on the job queue)

    - there is not even a single node having a newer
      configuration file

    - if we are not failing over (but just starting), the
      quorum agrees that we are the designated master

    - if any of the above is false, we prevent the current operation
      (i.e. we don't become the master)

#. at this point, the node transitions to the master role

#. for all the in-progress jobs, mark them as failed, with
   reason unknown or something similar (master failed, etc.)

Since, due to exceptional conditions, we could have a situation in which
no node can become the master due to inconsistent data, we will have
an override switch for the master daemon startup that will assume the
current node has the right data and will replicate all the
configuration files to the other nodes.

**Note**: the above algorithm is by no means an election algorithm; it
is a *confirmation* of the master role currently held by a node.
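
A minimal sketch of the quorum check (with a hypothetical per-node
query helper; not the actual masterd code) could be::

  def confirm_master_role(my_name, node_list, my_serial, my_top_job_id,
                          query_node, failover):
    """Return True if this node may assume the master role.

    ``query_node`` is an assumed helper returning a dict with the remote
    node's view: configuration serial, highest job ID and who it believes
    the master is; unreachable nodes return None.
    """
    needed = len(node_list) // 2 + 1   # half plus one
    votes = 0
    for name in node_list:
      info = query_node(name)
      if info is None:
        continue                       # unreachable nodes cannot vote for us
      if (info["config_serial"] > my_serial or
          info["top_job_id"] > my_top_job_id):
        return False                   # someone has newer data than we do
      if not failover and info["master"] != my_name:
        continue                       # at startup the quorum must agree on us
      votes += 1
    return votes >= needed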

Logging
+++++++

The logging system will be switched completely to the standard python
logging module; currently it's logging-based, but exposes a different
API, which is just overhead. As such, the code will be switched over
to standard logging calls, and only the setup will be custom.
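
A minimal sketch of such a per-daemon setup (the helper name and exact
format string are illustrative, not the final implementation)::

  import logging

  def setup_daemon_logging(logfile, debug=False):
    """Configure the root logger to write to the daemon's single log file."""
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s: pid=%(process)d %(levelname)s %(message)s"))
    root = logging.getLogger("")
    root.addHandler(handler)
    root.setLevel(debug and logging.DEBUG or logging.INFO)

  # e.g. in the master daemon startup code:
  # setup_daemon_logging("/var/log/ganeti/master-daemon.log")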

With this change, we will remove the separate debug/info/error logs,
and instead always have one log file per daemon:

- master-daemon.log for the master daemon
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be
  in the standard HTTP log format for possible parsing by other tools

Since the :term:`watcher` will only submit jobs to the master for
startup of the instances, its log file will contain less information
than before, mainly recording that it will start the instance, but not
the results.

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
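
A minimal sketch of this fork-per-request model (independent of the
actual HTTP server classes used by noded; the request handler is an
assumed helper)::

  import os

  def serve_request(connection):
    """Handle one inter-node RPC call in a forked child process."""
    pid = os.fork()
    if pid == 0:
      # Child: process the request with its own copy of the state, so a
      # slow or crashing call cannot block the node daemon.
      try:
        handle_rpc(connection)        # assumed request handler
      finally:
        os._exit(0)
    else:
      # Parent: immediately go back to accepting new connections;
      # finished children are reaped e.g. via waitpid(-1, os.WNOHANG).
      connection.close()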

Caveats
+++++++

A discussed alternative is to keep the current individual processes
touching the cluster configuration model. The reasons we have not
chosen this approach are:

- the cost of reading and unserializing the cluster state
  today is not small enough that we can ignore it; the addition of
  the job queue will make the startup cost even higher. While this
  runtime cost is low, it can be on the order of a few seconds on
  bigger clusters, which for very quick commands is comparable to
  the actual duration of the computation itself

- individual commands would make it harder to implement a
  fire-and-forget job request, along the lines of "start this
  instance but do not wait for it to finish"; it would require a
  model of backgrounding the operation and other things that are
  much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use Twisted, all the code needs to be 'twisted-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (Twisted name for a main
  loop)
- the more advanced granular locking that we want to implement would
  require, if written in the async manner, deep integration with the
  Twisted stack, to such an extent that business logic is inseparable
  from the protocol coding; we felt that this is an unreasonable
  requirement, and that a good protocol library should allow complete
  separation of low-level protocol calls and business logic; by
  comparison, the threaded approach combined with the HTTP(S) protocol
  required (for the first iteration) absolutely no changes from the 1.2
  code, and later changes for optimizing the inter-node RPC calls
  required just syntactic changes (e.g. ``rpc.call_...`` to
  ``self.rpc.call_...``)

Another issue is with the Twisted API stability - during the Ganeti
1.x lifetime, we had to implement workarounds many times for changes
in the Twisted version, so that for example 1.2 is able to use both
Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.


Granular locking
~~~~~~~~~~~~~~~~

We want to make sure that multiple operations can run in parallel on a
Ganeti Cluster. In order for this to happen we need to make sure
concurrently run operations don't step on each other's toes and break
the cluster.

This design addresses how we are going to deal with locking so that:

- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a Non-Goal. We have
identified a set of operations that are currently bottlenecks and need
to be parallelised and have worked on those. In the future it will be
possible to address other needs, thus making the cluster more and more
parallel one step at a time.

This section only talks about parallelising Ganeti level operations, aka
Logical Units, and the locking needed for that. Any other
synchronization lock needed internally by the code is outside its scope.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation
  transparent to its users
- automatically grabbing multiple locks in the right order (avoid
  deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it remains to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same
time.  Any attempt to grab a lock while already holding one in the wrong
order will be checked for, and fail.


The Locks
+++++++++

At the first stage we have decided to provide the following locks:

- One "config file" lock
- One lock per node in the cluster
- One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and
the node locks before the config lock. Locks will need to be acquired at
the same time for multiple instances and nodes, and internal ordering
will be dealt with within the locking library, which, for simplicity,
will just use alphabetical order.
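
For illustration, acquiring the locks for an operation touching two
instances and their nodes could look like the following sketch
(assuming a ``locks`` dictionary mapping resource names to
``SharedLock``-like objects; this shows the intended ordering, not the
final library API)::

  # Sorting gives the deterministic (alphabetical) internal ordering, and
  # instance locks are always taken before node locks, which are taken
  # before the config lock.
  instance_locks = sorted(["instance2.example.com", "instance1.example.com"])
  node_locks = sorted(["node1.example.com", "node3.example.com"])

  for name in instance_locks:
    locks[name].acquire()
  for name in node_locks:
    locks[name].acquire()
  locks["config"].acquire()
  try:
    pass  # ... operate on the instances/nodes and the configuration ...
  finally:
    # Release in reverse order of acquisition
    locks["config"].release()
    for name in reversed(node_locks):
      locks[name].release()
    for name in reversed(instance_locks):
      locks[name].release()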

Each lock has the following three possible statuses:

- unlocked (anyone can grab the lock)
- shared (anyone can grab/have the lock but only in shared mode)
- exclusive (no one else can grab/have the lock)

Handling conversion to more granularity
+++++++++++++++++++++++++++++++++++++++

In order to convert to a more granular approach transparently, each time
we split a lock into more fine-grained ones we'll create a "metalock",
which will depend on those sub-locks and live for the time necessary for
all the code to convert (or forever, in some conditions). When a
metalock exists all converted code must acquire it in shared mode, so it
can run concurrently, but still be exclusive with old code, which
acquires it exclusively.

In the beginning the only such lock will be what replaces the current
"command" lock, and will acquire all the locks in the system, before
proceeding. This lock will be called the "Big Ganeti Lock" because
holding that one will avoid any other concurrent Ganeti operations.
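
A sketch of how converted and unconverted code would interact with this
metalock (assuming a ``SharedLock``-style object named ``BGL`` with a
``shared`` flag on ``acquire``; an illustration rather than the final
API)::

  # Old, unconverted code: grabs the Big Ganeti Lock exclusively and
  # therefore still excludes every other operation.
  BGL.acquire(shared=0)
  try:
    pass  # ... whole-cluster operation, as in 1.2 ...
  finally:
    BGL.release()

  # Converted code: holds the BGL in shared mode (so several such
  # operations can run concurrently) plus only the specific locks it needs.
  BGL.acquire(shared=1)
  instance_lock.acquire()
  try:
    pass  # ... operation on a single instance ...
  finally:
    instance_lock.release()
    BGL.release()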

We might also want to devise more metalocks (e.g. all nodes, all
nodes+config) in order to make it easier for some parts of the code to
acquire what they need without specifying it explicitly.

In the future things like the node locks could become metalocks, should
we decide to split them into an even more fine-grained approach, but
this will probably be only after the first 2.0 version has been
released.

Adding/Removing locks
+++++++++++++++++++++

When a new instance or a new node is created an associated lock must be
added to the list. The relevant code will need to inform the locking
library of such a change.

This needs to be compatible with every other lock in the system,
especially metalocks that guarantee to grab sets of resources without
specifying them explicitly. The implementation of this will be handled
in the locking library itself.

When instances or nodes disappear from the cluster the relevant locks
must be removed. This is easier than adding new elements, as the code
which removes them must own them exclusively already, and thus deals
with metalocks exactly as normal code acquiring those locks. Any
operation queuing on a removed lock will fail after its removal.

Asynchronous operations
+++++++++++++++++++++++

For the first version the locking library will only export synchronous
operations, which will block till the needed locks are held, and only
fail if the request is impossible or somehow erroneous.

In the future we may want to implement different types of asynchronous
operations such as:

- try to acquire this lock set and fail if not possible
- try to acquire one of these lock sets and return the first one you
  were able to get (or after a timeout) (select/poll like)

These operations can be used to prioritize operations based on available
locks, rather than making them just blindly queue for acquiring them.
The inherent risk, though, is that any code using the first operation,
or setting a timeout for the second one, is susceptible to starvation
and thus may never be able to get the required locks and complete
certain tasks. Considering this, providing/using these operations should
not be among our first priorities.

Locking granularity
+++++++++++++++++++

For the first version of this code we'll convert each Logical Unit to
acquire/release the locks it needs, so locking will be at the Logical
Unit level.  In the future we may want to split logical units into
independent "tasklets" with their own locking requirements. A different
design doc (or mini design doc) will cover the move from Logical Units
to tasklets.

Code examples
+++++++++++++

In general when acquiring locks we should use a code path equivalent
to::

  lock.acquire()
  try:
    ...
    # other code
  finally:
    lock.release()

This makes sure we release all locks, and avoid possible deadlocks. Of
course, extra care must be taken not to leave, if possible, locked
structures in an unusable state. Note that with Python 2.5 a simpler
syntax will be possible, but we want to keep compatibility with Python
2.4 so the new constructs should not be used.

In order to avoid this extra indentation and code changes everywhere in
the Logical Units code, we decided to allow LUs to declare locks, and
then execute their code with their locks acquired. In the new world LUs
are called like this::

  # user passed names are expanded to the internal lock/resource name,
  # then known needed locks are declared
  lu.ExpandNames()
  ... some locking/adding of locks may happen ...
  # late declaration of locks for one level: this is useful because sometimes
  # we can't know which resource we need before locking the previous level
  lu.DeclareLocks() # for each level (cluster, instance, node)
  ... more locking/adding of locks can happen ...
  # these functions are called with the proper locks held
  lu.CheckPrereq()
  lu.Exec()
  ... locks declared for removal are removed, all acquired locks released ...

The Processor and the LogicalUnit class will contain exact documentation
on how locks are supposed to be declared.

Caveats
+++++++

This library will provide an easy upgrade path to bring all the code to
granular locking without breaking everything, and it will also guarantee
against a lot of common errors. Code switching from the old "lock
everything" lock to the new system, though, needs to be carefully
scrutinised to be sure it is really acquiring all the necessary locks,
and none has been overlooked or forgotten.

The code can contain other locks outside of this library, to synchronise
other threaded code (e.g. for the job queue), but in general these
should be leaf locks or carefully structured non-leaf ones, to avoid
deadlock race conditions.


Job Queue
~~~~~~~~~

Granular locking is not enough to speed up operations; we also need a
queue to store these and to be able to process as many as possible in
parallel.

A Ganeti job will consist of multiple ``OpCodes``, which are the basic
element of operation in Ganeti 1.2 (and will remain as such). Most
command-level commands are equivalent to one OpCode, or in some cases
to a sequence of opcodes, all of the same type (e.g. evacuating a node
will generate N opcodes of type replace disks).


Job execution: "Life of a Ganeti job"
++++++++++++++++++++++++++++++++++++++

#. Job gets submitted by the client. A new job identifier is generated
   and assigned to the job. The job is then automatically replicated
   [#replic]_ to all nodes in the cluster. The identifier is returned to
   the client.
#. A pool of worker threads waits for new jobs. If all are busy, the job
   has to wait and the first worker finishing its work will grab it.
   Otherwise any of the waiting threads will pick up the new job.
#. Client waits for job status updates by calling a waiting RPC
   function. Log messages may be shown to the user. Until the job is
   started, it can also be canceled.
#. As soon as the job is finished, its final result and status can be
   retrieved from the server.
#. If the client archives the job, it gets moved to a history directory.
   There will be a method to archive all jobs older than a given age.

.. [#replic] We need replication in order to maintain the consistency
   across all nodes in the system; the master node only differs in the
   fact that now it is running the master daemon, but if it fails and we
   do a master failover, the jobs are still visible on the new master
   (though marked as failed).

Failures to replicate a job to other nodes will only be flagged as
errors in the master daemon log if more than half of the nodes failed;
otherwise we ignore the failure, and rely on the fact that the next
update (for still running jobs) will retry the update. For finished
jobs, it is less of a problem.

Future improvements will look into checking the consistency of the job
list and jobs themselves at master daemon startup.


Job storage
+++++++++++

Jobs are stored in the filesystem as individual files, serialized
using JSON (standard serialization mechanism in Ganeti).

The choice of storing each job in its own file was made because:

- a file can be atomically replaced
- a file can easily be replicated to other nodes
- checking consistency across nodes can be implemented very easily,
  since all job files should be (at a given moment in time) identical

The other possible choices that were discussed and discounted were:

- single big file with all job data: not feasible due to difficult
  updates
- in-process databases: hard to replicate the entire database to the
  other nodes, and replicating individual operations does not mean we
  keep consistency


Queue structure
+++++++++++++++

All file operations have to be done atomically by writing to a temporary
file and subsequently renaming it. Except for log messages, every change
in a job is stored and replicated to other nodes.

::

  /var/lib/ganeti/queue/
    job-1 (JSON encoded job description and status)
    […]
    job-37
    job-38
    job-39
    lock (Queue managing process opens this file in exclusive mode)
    serial (Last job ID used)
    version (Queue format version)
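
As a small sketch of the atomic-update rule above (the replication step
is only indicated in a comment; the real code goes through the node
daemon's ``jobqueue_update`` call described below)::

  import os

  def write_queue_file(path, data):
    """Atomically replace a queue file with new content."""
    tmp_path = path + ".tmp"
    fd = open(tmp_path, "w")
    try:
      fd.write(data)
    finally:
      fd.close()
    # rename() is atomic on POSIX filesystems, so readers either see the
    # complete old contents or the complete new contents
    os.rename(tmp_path, path)
    # ... then replicate the same (path, data) pair to the other nodes,
    # e.g. via the jobqueue_update RPC call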


Locking
+++++++

Locking in the job queue is a complicated topic. It is called from more
than one thread and must be thread-safe. For simplicity, a single lock
is used for the whole job queue.

A more detailed description can be found in doc/locking.rst.


Internal RPC
++++++++++++

RPC calls available between the Ganeti master and node daemons:

jobqueue_update(file_name, content)
  Writes a file in the job queue directory.
jobqueue_purge()
  Cleans the job queue directory completely, including archived jobs.
jobqueue_rename(old, new)
  Renames a file in the job queue directory.


Client RPC
++++++++++

RPC between Ganeti clients and the Ganeti master daemon supports the
following operations (a usage sketch follows the list):

SubmitJob(ops)
  Submits a list of opcodes and returns the job identifier. The
  identifier is guaranteed to be unique during the lifetime of a
  cluster.
WaitForJobChange(job_id, fields, […], timeout)
  This function waits until a job changes or a timeout expires. The
  condition for when a job changed is defined by the fields passed and
  the last log message received.
QueryJobs(job_ids, fields)
  Returns field values for the job identifiers passed.
CancelJob(job_id)
  Cancels the job specified by identifier. This operation may fail if
  the job is already running, canceled or finished.
ArchiveJob(job_id)
  Moves a job into the …/archive/ directory. This operation will fail
  if the job has not been canceled or finished.
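
For illustration, client-side use of these calls through the python
LUXI library might look roughly like the following (the module layout,
class name and exact call signatures are assumptions for this sketch,
not the final client API)::

  from ganeti import luxi, opcodes   # assumed module layout

  client = luxi.Client()             # connects to the master UNIX socket

  # Submit a job consisting of a single opcode and remember its ID
  job_id = client.SubmitJob([opcodes.OpStartupInstance(
      instance_name="instance1.example.com")])

  # Wait (without polling) for the job's "status" field to change; the
  # remaining arguments from the description above are not spelled out here
  client.WaitForJobChange(job_id, ["status"], [], None)

  # Fetch the final status and result once the job has finished
  (status, result) = client.QueryJobs([job_id], ["status", "opresult"])[0]

  # Finished or canceled jobs can be moved out of the active queue
  client.ArchiveJob(job_id)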


Job and opcode status
+++++++++++++++++++++

Each job and each opcode has, at any time, one of the following states:

Queued
  The job/opcode was submitted, but did not yet start.
Waiting
  The job/opcode is waiting for a lock to proceed.
Running
  The job/opcode is running.
Canceled
  The job/opcode was canceled before it started.
Success
  The job/opcode ran and finished successfully.
Error
  The job/opcode was aborted with an error.

If the master is aborted while a job is running, the job will be set to
the Error status once the master starts again.


History
+++++++

Archived jobs are kept in a separate directory,
``/var/lib/ganeti/queue/archive/``.  This is done in order to speed up
the queue handling: by default, the jobs in the archive are not
touched by any functions. Only the current (unarchived) jobs are
parsed, loaded, and verified (if implemented) by the master daemon.


Ganeti updates
++++++++++++++

The queue has to be completely empty for Ganeti updates with changes
in the job queue structure. In order to allow this, there will be a
way to prevent new jobs from entering the queue.

Object parameters
~~~~~~~~~~~~~~~~~

Across all cluster configuration data, we have multiple classes of
parameters:

A. cluster-wide parameters (e.g. name of the cluster, the master);
   these are the ones that we have today, and are unchanged from the
   current model

#. node parameters

#. instance specific parameters, e.g. the name of disks (LV), that
   cannot be shared with other instances

#. instance parameters, that are or can be the same for many
   instances, but are not hypervisor related; e.g. the number of VCPUs,
   or the size of memory

#. instance parameters that are hypervisor specific (e.g. kernel_path
   or PAE mode)


The following definitions for instance parameters will be used below:

:hypervisor parameter:
  a hypervisor parameter (or hypervisor specific parameter) is defined
  as a parameter that is interpreted by the hypervisor support code in
  Ganeti and usually is specific to a particular hypervisor (like the
  kernel path for :term:`PVM` which makes no sense for :term:`HVM`).

:backend parameter:
  a backend parameter is defined as an instance parameter that can be
  shared among a list of instances, and is either generic enough not
  to be tied to a given hypervisor or cannot influence the hypervisor
  behaviour at all.

  For example: memory, vcpus, auto_balance

  All these parameters will be encoded into constants.py with the prefix
  "BE\_" and the whole list of parameters will exist in the set
  "BES_PARAMETERS"

:proper parameter:
  a parameter whose value is unique to the instance (e.g. the name of a
  LV, or the MAC of a NIC)

As a general rule, for all kinds of parameters, "None" (or in
JSON-speak, "nil") will no longer be a valid value for a parameter. As
such, only non-default parameters will be saved as part of objects in
the serialization step, reducing the size of the serialized format.

Cluster parameters
++++++++++++++++++

Cluster parameters remain as today, attributes at the top level of the
Cluster object. In addition, two new attributes at this level will
hold defaults for the instances:

- hvparams, a dictionary indexed by hypervisor type, holding default
  values for hypervisor parameters that are not defined/overridden by
  the instances of this hypervisor type

- beparams, a dictionary holding (for 2.0) a single element 'default',
  which holds the default value for backend parameters

Node parameters
+++++++++++++++

Node-related parameters are very few, and we will continue using the
same model for these as previously (attributes on the Node object).

There are three new node flags, described in a separate section "node
flags" below.

Instance parameters
+++++++++++++++++++

As described before, the instance parameters are split in three:
instance proper parameters, unique to each instance, instance
hypervisor parameters and instance backend parameters.

The "hvparams" and "beparams" are kept in two dictionaries at instance
level. Only non-default parameters are stored (but once customized, a
parameter will be kept, even with the same value as the default one,
until reset).

The names for hypervisor parameters in the instance.hvparams subtree
should be chosen to be as generic as possible, especially if specific
parameters could conceivably be useful for more than one hypervisor,
e.g. ``instance.hvparams.vnc_console_port`` instead of using both
``instance.hvparams.hvm_vnc_console_port`` and
``instance.hvparams.kvm_vnc_console_port``.

There are some special cases related to disks and NICs (for example):
a disk has both Ganeti-related parameters (e.g. the name of the LV)
and hypervisor-related parameters (how the disk is presented to/named
in the instance). The former parameters remain as proper-instance
parameters, while the latter values are migrated to the hvparams
structure. In 2.0, such hypervisor parameters will exist only globally
per instance, and not per disk (e.g. all NICs will be exported as being
of the same type).

Starting from the 1.2 list of instance parameters, here is how they
will be mapped to the three classes of parameters:

- name (P)
- primary_node (P)
- os (P)
- hypervisor (P)
- status (P)
- memory (BE)
- vcpus (BE)
- nics (P)
- disks (P)
- disk_template (P)
- network_port (P)
- kernel_path (HV)
- initrd_path (HV)
- hvm_boot_order (HV)
- hvm_acpi (HV)
- hvm_pae (HV)
- hvm_cdrom_image_path (HV)
- hvm_nic_type (HV)
- hvm_disk_type (HV)
- vnc_bind_address (HV)
- serial_no (P)


Parameter validation
++++++++++++++++++++

To support the new cluster parameter design, additional features will
be required from the hypervisor support implementations in Ganeti.

The hypervisor support implementation API will be extended with the
following features (a sketch follows the list):

:PARAMETERS: class-level attribute holding the list of valid parameters
  for this hypervisor
:CheckParamSyntax(hvparams): checks that the given parameters are
  valid (as in the names are valid) for this hypervisor; usually just
  comparing ``hvparams.keys()`` and ``cls.PARAMETERS``; this is a class
  method that can be called from within master code (i.e. cmdlib) and
  should be safe to do so
:ValidateParameters(hvparams): verifies the values of the provided
  parameters against this hypervisor; this is a method that will be
  called on the target node, from backend.py code, and as such can
  make node-specific checks (e.g. kernel_path checking)
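
A minimal sketch of a hypervisor class implementing these hooks
(illustrative only; the real base class, parameter list and error types
are not spelled out here)::

  import os

  class ExampleHypervisor(object):
    """Illustrative hypervisor support class."""
    PARAMETERS = ["kernel_path", "initrd_path"]

    @classmethod
    def CheckParamSyntax(cls, hvparams):
      """Name-level check, safe to run on the master (cmdlib)."""
      invalid = [name for name in hvparams if name not in cls.PARAMETERS]
      if invalid:
        raise ValueError("Unknown hypervisor parameters: %s" % invalid)

    def ValidateParameters(self, hvparams):
      """Value-level check, run on the target node (backend.py)."""
      kernel = hvparams.get("kernel_path")
      if kernel and not os.path.isfile(kernel):
        raise ValueError("Kernel %s not found on this node" % kernel)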

Default value application
+++++++++++++++++++++++++

The application of defaults to an instance is done in the Cluster
object, via two new methods as follows:

- ``Cluster.FillHV(instance)``, which returns a 'filled' hvparams dict,
  based on the instance's hvparams and the cluster's
  ``hvparams[instance.hypervisor]``

- ``Cluster.FillBE(instance, be_type="default")``, which returns the
  beparams dict, based on the instance and cluster beparams

The FillHV/BE transformations will be used, for example, in the
RpcRunner when sending an instance for activation/stop, and the sent
instance hvparams/beparams will have the final value (noded code doesn't
know about defaults).

LU code will need to self-call the transformation, if needed.
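
The filling itself is essentially a dictionary merge of cluster
defaults and instance overrides; a minimal sketch of ``FillHV`` along
these lines (not the exact implementation) would be::

  class Cluster(object):
    # ...
    def FillHV(self, instance):
      """Return instance.hvparams with cluster defaults filled in."""
      filled = self.hvparams.get(instance.hypervisor, {}).copy()
      filled.update(instance.hvparams)   # instance values win over defaults
      return filled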
957

    
958
Opcode changes
959
++++++++++++++
960

    
961
The parameter changes will have impact on the OpCodes, especially on
962
the following ones:
963

    
964
- ``OpInstanceCreate``, where the new hv and be parameters will be sent
965
  as dictionaries; note that all hv and be parameters are now optional,
966
  as the values can be instead taken from the cluster
967
- ``OpInstanceQuery``, where we have to be able to query these new
968
  parameters; the syntax for names will be ``hvparam/$NAME`` and
969
  ``beparam/$NAME`` for querying an individual parameter out of one
970
  dictionary, and ``hvparams``, respectively ``beparams``, for the whole
971
  dictionaries
972
- ``OpModifyInstance``, where the the modified parameters are sent as
973
  dictionaries
974

    
975
Additionally, we will need new OpCodes to modify the cluster-level
976
defaults for the be/hv sets of parameters.
977

    
978
Caveats
979
+++++++
980

    
981
One problem that might appear is that our classification is not
982
complete or not good enough, and we'll need to change this model. As
983
the last resort, we will need to rollback and keep 1.2 style.
984

    
985
Another problem is that classification of one parameter is unclear
986
(e.g. ``network_port``, is this BE or HV?); in this case we'll take
987
the risk of having to move parameters later between classes.
988

    
989
Security
990
++++++++
991

    
992
The only security issue that we foresee is if some new parameters will
993
have sensitive value. If so, we will need to have a way to export the
994
config data while purging the sensitive value.
995

    
996
E.g. for the drbd shared secrets, we could export these with the
997
values replaced by an empty string.
998

    
999
Node flags
~~~~~~~~~~

Ganeti 2.0 adds three node flags that change the way nodes are handled
within Ganeti and the related infrastructure (iallocator interaction,
RAPI data export).

*master candidate* flag
+++++++++++++++++++++++

Ganeti 2.0 allows more scalability in operation by introducing
parallelization. However, a new bottleneck is reached, namely the
synchronization and replication of cluster configuration to all nodes
in the cluster.

This breaks scalability as the speed of the replication decreases
roughly with the number of nodes in the cluster. The goal of the
master candidate flag is to change this O(n) into O(1) with respect to
job and configuration data propagation.

Only nodes having this flag set (let's call this set of nodes the
*candidate pool*) will have jobs and configuration data replicated.

The cluster will have a new parameter (runtime changeable) called
``candidate_pool_size`` which represents the number of candidates the
cluster tries to maintain (preferably automatically).

This will impact the cluster operations as follows:

- jobs and config data will be replicated only to a fixed set of nodes
- master fail-over will only be possible to a node in the candidate pool
- cluster verify needs changing to account for these two roles
- external scripts will no longer have access to the configuration
  file (this is not recommended anyway)


The caveats of this change are:

- if all candidates are lost (completely), cluster configuration is
  lost (but it should be backed up external to the cluster anyway)

- failed nodes which are candidates must be dealt with properly, so
  that we don't lose too many candidates at the same time; this will be
  reported in cluster verify

- the 'all equal' concept of ganeti is no longer true

- the partial distribution of config data means that all nodes will
  have to revert to ssconf files for master info (as in 1.2)

Advantages:

- speed on a 100+ node simulated cluster is greatly enhanced, even
  for a simple operation; ``gnt-instance remove`` on a diskless instance
  goes from ~9 seconds to ~2 seconds

- node failure of non-candidates will have less impact on the cluster

The default value for the candidate pool size will be set to 10, but
this can be changed at cluster creation and modified any time later.

Testing on simulated big clusters with sequential and parallel jobs
shows that this value (10) is a sweet spot from a performance and load
point of view.

*offline* flag
++++++++++++++

In order to better support the situation in which nodes are offline
(e.g. for repair) without altering the cluster configuration, Ganeti
needs to be told about, and needs to properly handle, this state for
nodes.

This will result in simpler procedures, and fewer mistakes, when the
amount of node failures is high on an absolute scale (either due to a
high failure rate or simply big clusters).

Nodes having this attribute set will not be contacted for inter-node
RPC calls, will not be master candidates, and will not be able to host
instances as primaries.

Setting this attribute on a node:

- will not be allowed if the node is the master
- will not be allowed if the node has primary instances
- will cause the node to be demoted from the master candidate role (if
  it was), possibly causing another node to be promoted to that role

This attribute will impact the cluster operations as follows:

- querying these nodes for anything will fail instantly in the RPC
  library, with a specific RPC error (RpcResult.offline == True)

- they will be listed in the Other section of cluster verify

The code is changed in the following ways:

- RPC calls were converted to skip such nodes:

  - RpcRunner-instance-based RPC calls are easy to convert

  - static/classmethod RPC calls are harder to convert, and were left
    alone

- the RPC results were unified so that this new result state (offline)
  can be differentiated

- master voting still queries nodes in repair, as we need to ensure
  consistency in case the (wrong) masters have old data, and nodes have
  come back from repairs

Caveats:

- some operation semantics are less clear (e.g. what to do on instance
  start with an offline secondary?); for now, these will just fail as if
  the flag is not set (but faster)
- a 2-node cluster with one node offline needs manual startup of the
  master with a special flag to skip voting (as the master can't get a
  quorum there)

One of the advantages of implementing this flag is that it will allow
future automation tools to automatically put the node in repairs and
recover from this state, and the code (should/will) handle this much
better than just timing out. So, future possible improvements (for
later versions):

- watcher will detect nodes which fail RPC calls, will attempt to ssh
  to them, and on failure will put them offline
- watcher will try to ssh and query the offline nodes, and if successful
  will take them off the repair list

Alternatives considered: The RPC call model in 2.0 is, by default,
much nicer - errors are logged in the background, and job/opcode
execution is clearer, so we could simply not introduce this. However,
having this state will make both the codepaths clearer (offline
vs. temporary failure) and the operational model (it's not a node with
errors, but an offline node).


*drained* flag
++++++++++++++

Due to parallel execution of jobs in Ganeti 2.0, we could have the
following situation:

- gnt-node migrate + failover is run
- gnt-node evacuate is run, which schedules a long-running 6-opcode
  job for the node
- partway through, a new job comes in that runs an iallocator script,
  which finds the above node as empty and a very good candidate
- gnt-node evacuate has finished, but now it has to be run again, to
  clean up the above instance(s)

In order to prevent this situation, and to be able to get nodes into
proper offline status easily, a new *drained* flag was added to the
nodes.

This flag (which actually means "is being, or was, drained, and is
expected to go offline") will prevent allocations on the node, but
otherwise all other operations (start/stop instance, query, etc.) are
working without any restrictions.

Interaction between flags
+++++++++++++++++++++++++

While these flags are implemented as separate flags, they are
mutually exclusive and act together with the master node role
as a single *node status* value. In other words, a node is only in one
of these roles at a given time. The lack of any of these flags denotes
a regular node.

The current node status is visible in the ``gnt-cluster verify``
output, and the individual flags can be examined via separate fields in
the ``gnt-node list`` output.

These new flags will be exported in both the iallocator input message
and via RAPI; see the respective man pages for the exact names.

Feature changes
---------------

The main feature-level changes will be:

- a number of disk related changes
- removal of the fixed two-disk, one-NIC per instance limitation

Disk handling changes
~~~~~~~~~~~~~~~~~~~~~

The storage options available in Ganeti 1.x were introduced based on
then-current software (first DRBD 0.7, then later DRBD 8) and the
estimated usage patterns. However, experience has later shown that some
assumptions made initially are not true and that more flexibility is
needed.

One main assumption made was that disk failures should be treated as
'rare' events, and that each of them needs to be manually handled in
order to ensure data safety; however, both these assumptions are false:

- disk failures can be a common occurrence, based on usage patterns or
  cluster size
- our disk setup is robust enough (referring to DRBD8 + LVM) that we
  could automate more of the recovery

Note that we still don't have fully-automated disk recovery as a goal,
but our goal is to reduce the manual work needed.

As such, we plan the following main changes:

- DRBD8 is much more flexible and stable than its previous version
  (0.7), such that removing the support for the ``remote_raid1``
  template and focusing only on DRBD8 is easier

- dynamic discovery of DRBD devices is not actually needed in a cluster
  where the DRBD namespace is controlled by Ganeti; switching to a
  static assignment (done at either instance creation time or change
  secondary time) will change the disk activation time from O(n) to
  O(1), which on big clusters is a significant gain

- remove the hard dependency on LVM (currently all available storage
  types are ultimately backed by LVM volumes) by introducing file-based
  storage

Additionally, a number of smaller enhancements are also planned:

- support a variable number of disks
- support read-only disks

Future enhancements in the 2.x series, which do not require base design
changes, might include:

- enhancement of the LVM allocation method in order to try to keep
  all of an instance's virtual disks on the same physical
  disks

- add support for DRBD8 authentication at handshake time in
  order to ensure each device connects to the correct peer

- remove the restriction of failover only to the secondary,
  which creates very strict rules on cluster allocation

DRBD minor allocation
+++++++++++++++++++++

Currently, when trying to identify or activate a new DRBD (or MD)
device, the code scans all in-use devices in order to see if we find
one that looks similar to our parameters and is already in the desired
state or not. Since this needs external commands to be run, it is very
slow when more than a few devices are already present.

Therefore, we will change the discovery model from dynamic to
static. When a new device is logically created (added to the
configuration) a free minor number is computed from the list of
devices that should exist on that node and assigned to that
device.

At device activation, if the minor is already in use, we check if
it has our parameters; if not, we just destroy the device (if
possible, otherwise we abort) and start it with our own
parameters.

This means that we in effect take ownership of the minor space for
that device type; if there's a user-created DRBD minor, it will be
automatically removed.

The change will have the effect of reducing the number of external
commands run per device from a constant number times the index of the
first free DRBD minor to just a constant number.
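
Finding the free minor then becomes a pure configuration-side
computation; a minimal sketch (operating on a plain list of minors
already assigned on the node) would be::

  def find_free_minor(used_minors):
    """Return the smallest DRBD minor not yet assigned on this node."""
    used = set(used_minors)
    minor = 0
    while minor in used:
      minor += 1
    return minor

  # e.g. find_free_minor([0, 1, 3]) == 2, without running any external command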
1265

    
1266
Removal of obsolete device types (MD, DRBD7)
1267
++++++++++++++++++++++++++++++++++++++++++++
1268

    
1269
We need to remove these device types because of two issues. First,
1270
DRBD7 has bad failure modes in case of dual failures (both network and
1271
disk - it cannot propagate the error up the device stack and instead
1272
just panics. Second, due to the asymmetry between primary and
1273
secondary in MD+DRBD mode, we cannot do live failover (not even if we
1274
had MD+DRBD8).
1275

    
1276
File-based storage support
1277
++++++++++++++++++++++++++
1278

    
1279
Using files instead of logical volumes for instance storage would
1280
allow us to get rid of the hard requirement for volume groups for
1281
testing clusters and it would also allow usage of SAN storage to do
1282
live failover taking advantage of this storage solution.
1283

    
1284
Better LVM allocation
+++++++++++++++++++++

Currently, the LV to PV allocation mechanism is a very simple one: at
each new request for a logical volume, tell LVM to allocate the volume
in order, based on the amount of free space. This is good for
simplicity and for keeping the usage equally spread over the available
physical disks; however, it introduces the problem that an instance
could end up with its (currently) two drives on two physical disks, or
(worse) that the data and metadata for a DRBD device end up on
different drives.

This is bad because it causes unneeded ``replace-disks`` operations in
case of a physical failure.

The solution is to batch allocations for an instance and make the LVM
handling code try to allocate all the storage of one instance as close
together as possible. We will still allow the logical volumes to spill
over to additional disks as needed.

Note that this clustered allocation can only be attempted at initial
instance creation, or when changing the secondary node. At add-disk
time, or when replacing individual disks, it is not easy enough to
compute the current disk map, so we will not attempt the clustering.

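A rough sketch of the batching idea, assuming the free space per PV
has been queried beforehand (the helper and its inputs are
illustrative and error handling is omitted)::

  import subprocess

  def create_instance_lvs(vg_name, pv_free, volumes):
      """Try to place all of an instance's volumes on a single PV.

      pv_free maps PV names to free mebibytes, volumes is a list of
      (lv_name, size_mib) pairs; neither is the real Ganeti data
      structure.
      """
      total = sum(size for _, size in volumes)
      # Prefer the PV with the most free space that can hold the whole
      # batch; if none can, pass no PV list and let LVM spill over.
      fitting = sorted(
          (free, pv) for pv, free in pv_free.items() if free >= total)
      restrict = [fitting[-1][1]] if fitting else []
      for lv_name, size in volumes:
          cmd = ["lvcreate", "-L", "%dm" % size, "-n", lv_name, vg_name]
          subprocess.check_call(cmd + restrict)
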
DRBD8 peer authentication at handshake
++++++++++++++++++++++++++++++++++++++

DRBD8 has a new feature that allows authentication of the peer at
connect time. We can use this more to prevent connecting to the wrong
peer than to secure the connection. Even though we have never had
issues with wrong connections, it would be good to implement this.

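For reference, DRBD8 exposes this through the ``cram-hmac-alg`` and
``shared-secret`` options of the ``net`` section, so the generated
per-device configuration would contain something along these lines
(the secret shown is, of course, made up)::

  net {
    cram-hmac-alg "sha1";
    shared-secret "per-instance-secret";
  }
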
LVM self-repair (optional)
++++++++++++++++++++++++++

The complete failure of a physical disk is very tedious to
troubleshoot, mainly because of the many failure modes and the many
steps needed. We can safely automate some of the steps, more
specifically the ``vgreduce --removemissing`` operation, using the
following method (a sketch of the decision logic follows the list):

#. check if all nodes have consistent volume groups
#. if yes, and previous status was yes, do nothing
#. if yes, and previous status was no, save status and restart
#. if no, and previous status was no, do nothing
#. if no, and previous status was yes:

   #. if more than one node is inconsistent, do nothing
   #. if only one node is inconsistent:

      #. run ``vgreduce --removemissing``
      #. log this occurrence in the Ganeti log in a form that
         can be used for monitoring
      #. [FUTURE] run ``replace-disks`` for all
         instances affected

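A minimal sketch of this decision logic, assuming a periodic check
that remembers the previous round's overall status (all names are
illustrative)::

  def vg_self_repair_target(vg_consistent, prev_all_consistent):
      """Return the node on which to run vgreduce --removemissing.

      vg_consistent maps node names to a boolean "volume group is
      consistent" flag; prev_all_consistent is the overall result of
      the previous check. Returns None when no automatic repair should
      be attempted (the caller persists the new status between rounds).
      """
      all_consistent = all(vg_consistent.values())
      if all_consistent or not prev_all_consistent:
          # steps 2-4: nothing to repair (at most the saved status changes)
          return None
      bad_nodes = [name for name, ok in vg_consistent.items() if not ok]
      if len(bad_nodes) != 1:
          # more than one inconsistent node at once: too risky to automate
          return None
      # exactly one inconsistent node: safe to repair (and log it)
      return bad_nodes[0]
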
Failover to any node
++++++++++++++++++++

With a modified disk activation sequence, we can implement the
*failover to any* functionality, removing many of the layout
restrictions of a cluster:

- the need to reserve memory on the current secondary: this is reduced
  to the need to reserve memory anywhere on the cluster

- the need to first fail over and then replace the secondary for an
  instance: with failover-to-any, we can fail over directly to
  another node, which also replaces the disks in the same
  step

In the following, we denote the current primary by P1, the current
secondary by S1, and the new primary and secondary by P2 and S2. P2
is fixed to the node the user chooses, but the choice of S2 can be
made between P1 and S1. This choice can be constrained, depending on
which of P1 and S1 has failed.

- if P1 has failed, then S1 must become S2, and live migration is not
  possible
- if S1 has failed, then P1 must become S2, and live migration could be
  possible (in theory, but this is not a design goal for 2.0)

The algorithm for performing the failover is straightforward:

- verify that S2 (the node the user has chosen to keep as secondary) has
  valid data (is consistent)

- tear down the current DRBD association and set up a DRBD pairing
  between P2 (P2 is indicated by the user) and S2; since P2 has no data,
  it will start re-syncing from S2

- as soon as P2 is in state SyncTarget (i.e. after the resync has
  started but before it has finished), we can promote it to primary role
  (r/w) and start the instance on P2

- as soon as the S2-to-P2 sync has finished, we can remove
  the old data on the old node that has not been chosen for
  S2

Caveats: during the S2-to-P2 sync, a (non-transient) network error
will cause I/O errors on the instance, so (if a longer instance
downtime is acceptable) we can postpone the restart of the instance
until the resync is done. However, disk I/O errors on S2 will cause
data loss, since we don't have a good copy of the data anymore, so in
this case waiting for the sync to complete is not an option. As such,
it is recommended that this feature is used only in conjunction with
proper disk monitoring.

Live migration note: while failover-to-any is possible for all choices
of S2, migration-to-any is possible only if we keep P1 as S2.

Caveats
+++++++

The dynamic device model, while more complex, has an advantage: it
will not reuse by mistake the DRBD device of another instance, since
it always looks for either our own or a free one.

The static one, in contrast, will assume that given a minor number N,
it's ours and we can take over. This needs careful implementation such
that if the minor is in use, either we are able to cleanly shut it
down, or we abort the startup. Otherwise, it could be that we start
syncing between two instances' disks, causing data loss.

Variable number of disk/NICs per instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Variable number of disks
++++++++++++++++++++++++

In order to support high-security scenarios (for example read-only sda
and read-write sdb), we need to make the disk definition fully
flexible. This has less impact than it might look at first sight: only
the instance creation code has a hard-coded number of disks, not the
disk handling code. The block device handling and most of the instance
handling code is already working with "the instance's disks" as
opposed to "the two disks of the instance", but some pieces are not
(e.g. import/export) and the code needs a review to ensure safety.

The objective is to be able to specify the number of disks at
instance creation, and to be able to toggle a disk from read-only to
read-write afterward.

Variable number of NICs
+++++++++++++++++++++++

Similar to the disk change, we need to allow multiple network
interfaces per instance. This will affect the internal code (some
functions will have to stop assuming that ``instance.nics`` is a list
of length one), the OS API, which currently can export/import only one
instance, and the command line interface.

Interface changes
-----------------

There are two areas of interface changes: API-level changes (the OS
interface and the RAPI interface) and the command line interface
changes.

OS interface
~~~~~~~~~~~~

The current Ganeti OS interface, version 5, is tailored for Ganeti 1.2.
The interface is composed of a series of scripts which get called with
certain parameters to perform OS-dependent operations on the cluster.
The current scripts are:

create
  called when a new instance is added to the cluster
export
  called to export an instance disk to a stream
import
  called to import from a stream to a new instance
rename
  called to perform the OS-specific operations necessary for renaming
  an instance

Currently these scripts suffer from the limitations of Ganeti 1.2: for
example, they accept exactly one block and one swap device to operate
on, rather than any amount of generic block devices; they blindly
assume that an instance will have just one network interface; and they
cannot be configured to optimise the instance for a particular
hypervisor.

Since in Ganeti 2.0 we want to support multiple hypervisors and a
non-fixed number of network interfaces and disks, the OS interface
needs to change in order to transmit the appropriate amount of
information about an instance to the scripts managing its operating
system, when operating on it. Moreover, since some old assumptions
usually made in OS scripts are no longer valid, we need to
re-establish a common understanding of what can and cannot be assumed
regarding the Ganeti environment.

When designing the new OS API, our priorities are:

- ease of use
- future extensibility
- ease of porting from the old API
- modularity

As such we want to limit the number of scripts that must be written to
support an OS, and make it easy to share code between them by making
their input uniform. We will also leave the current script structure
unchanged, as far as we can, and make a few of the scripts (import,
export and rename) optional. Most information will be passed to the
scripts through environment variables, for ease of access and at the
same time ease of using only the information a script needs.

The Scripts
+++++++++++

As in Ganeti 1.2, every OS which wants to be installed in Ganeti needs
to support the following functionality, through scripts:

create:
  used to create a new instance running that OS. This script should
  prepare the block devices, and install them so that the new OS can
  boot under the specified hypervisor.
export (optional):
  used to export an installed instance using the given OS to a format
  which can be used to import it back into a new instance.
import (optional):
  used to import an exported instance into a new one. This script is
  similar to create, but the new instance should have the content of the
  export, rather than contain a pristine installation.
rename (optional):
  used to perform the internal OS-specific operations needed to rename
  an instance.

If any optional script is not implemented, Ganeti will refuse to
perform the given operation on instances using the non-implementing
OS. Of course the create script is mandatory, and it doesn't make
sense to support either the export or the import operation but not
both.

Incompatibilities with 1.2
__________________________

We expect the following incompatibilities between the OS scripts for 1.2
and the ones for 2.0:

- Input parameters: in 1.2 those were passed on the command line, in 2.0
  we'll use environment variables, as there will be a lot more
  information and not all OSes may care about all of it.
- Number of calls: export scripts will be called once for each device
  the instance has, and import scripts once for every exported disk.
  Imported instances will be forced to have a number of disks greater
  than or equal to that of the export.
- Some scripts are not compulsory: if such a script is missing the
  relevant operations will be forbidden for instances of that OS. This
  makes it easier to distinguish between unsupported operations and
  no-op ones (if any).

Input
_____

Rather than using command line flags, as they do now, scripts will
accept inputs from environment variables. We expect the following input
values:

OS_API_VERSION
  The version of the OS API that the following parameters comply with;
  this is used so that in the future we could have OSes supporting
  multiple versions and thus Ganeti send the proper version in this
  parameter
INSTANCE_NAME
  Name of the instance acted on
HYPERVISOR
  The hypervisor the instance should run on (e.g. 'xen-pvm', 'xen-hvm',
  'kvm')
DISK_COUNT
  The number of disks this instance will have
NIC_COUNT
  The number of NICs this instance will have
DISK_<N>_PATH
  Path to the Nth disk.
DISK_<N>_ACCESS
  W if read/write, R if read only. OS scripts are not supposed to touch
  read-only disks, but are passed this information so they know about
  such disks.
DISK_<N>_FRONTEND_TYPE
  Type of the disk as seen by the instance. Can be 'scsi', 'ide',
  'virtio'
DISK_<N>_BACKEND_TYPE
  Type of the disk as seen from the node. Can be 'block', 'file:loop' or
  'file:blktap'
NIC_<N>_MAC
  MAC address for the Nth network interface
NIC_<N>_IP
  IP address for the Nth network interface, if available
NIC_<N>_BRIDGE
  Node bridge the Nth network interface will be connected to
NIC_<N>_FRONTEND_TYPE
  Type of the Nth NIC as seen by the instance. For example 'virtio',
  'rtl8139', etc.
DEBUG_LEVEL
  Whether more output should be produced, for debugging purposes.
  Currently the only valid values are 0 and 1.

These are only the basic variables we are thinking of now, but more
may come during the implementation and they will be documented in the
:manpage:`ganeti-os-api` man page. All these variables will be
available to all scripts.

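For illustration, a create script written in Python could collect its
input roughly as follows (a sketch only; the actual partitioning and
installation steps are OS-specific and merely hinted at)::

  #!/usr/bin/env python
  # Illustrative skeleton of a create script using the variables above.
  import os
  import sys

  def main():
      instance = os.environ["INSTANCE_NAME"]
      hypervisor = os.environ["HYPERVISOR"]
      disks = [os.environ["DISK_%d_PATH" % i]
               for i in range(int(os.environ["DISK_COUNT"]))]
      nics = [os.environ["NIC_%d_MAC" % i]
              for i in range(int(os.environ["NIC_COUNT"]))]
      # User-targeted messages go to stderr (see Output/Behaviour below).
      sys.stderr.write("Installing %s for %s: %d disk(s), %d NIC(s)\n" %
                       (instance, hypervisor, len(disks), len(nics)))
      # ... partition disks[0], create filesystems, install the OS ...

  if __name__ == "__main__":
      main()
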
Some scripts will need a few more pieces of information to work. These
will have per-script variables, such as for example:

OLD_INSTANCE_NAME
  rename: the name the instance should be renamed from.
EXPORT_DEVICE
  export: device to be exported, a snapshot of the actual device. The
  data must be exported to stdout.
EXPORT_INDEX
  export: sequential number of the instance device targeted.
IMPORT_DEVICE
  import: device to send the data to, part of the new instance. The data
  must be imported from stdin.
IMPORT_INDEX
  import: sequential number of the instance device targeted.

(Rationale for INSTANCE_NAME as an environment variable: the instance
name is always needed and we could pass it on the command line. On the
other hand, though, this would force scripts to both access the
environment and parse the command line, so we'll move it to the
environment for uniformity.)

Output/Behaviour
________________

As discussed, scripts should only send user-targeted information to
stderr. The create and import scripts are supposed to format/initialise
the given block devices and install the correct instance data. The
export script is supposed to export instance data to stdout in a format
understandable by the import script. The data will be compressed by
Ganeti, so no compression should be done. The rename script should only
modify the instance's knowledge of what its name is.

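As an illustration of these conventions, an export script might do
little more than stream the snapshot device to stdout (a simplified
sketch, without error handling)::

  #!/usr/bin/env python
  # Illustrative skeleton of an export script: instance data goes to
  # stdout, user-targeted messages to stderr, and no compression is
  # applied since Ganeti compresses the stream itself.
  import os
  import shutil
  import sys

  def main():
      device = os.environ["EXPORT_DEVICE"]
      index = os.environ["EXPORT_INDEX"]
      sys.stderr.write("Exporting disk %s from %s\n" % (index, device))
      out = getattr(sys.stdout, "buffer", sys.stdout)
      with open(device, "rb") as src:
          shutil.copyfileobj(src, out)

  if __name__ == "__main__":
      main()
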
Other declarative style features
++++++++++++++++++++++++++++++++

Similar to Ganeti 1.2, OS specifications will need to provide a
'ganeti_api_version' file containing a list of numbers matching the
version(s) of the API they implement. Ganeti itself will always be
compatible with one version of the API and may maintain backwards
compatibility if it's feasible to do so. The numbers are one per line,
so an OS supporting both version 5 and version 20 will have a file
containing two lines. This is different from Ganeti 1.2, which only
supported one version number.

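For example, the 'ganeti_api_version' file of an OS supporting both of
those versions would simply contain::

  5
  20
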
In addition to that, an OS will be able to declare that it supports
only a subset of the Ganeti hypervisors, by listing them in the
'hypervisors' file.

Caveats/Notes
+++++++++++++

We might want to have a "default" import/export behaviour that just
dumps all disks and restores them. This can save work as most systems
will just do this, while allowing flexibility for different systems.

Environment variables are limited in size, but we expect that there will
be enough space to store the information we need. If we discover that
this is not the case we may want to go to a more complex API such as
storing that information on the filesystem and providing the OS script
with the path to a file where it is encoded in some format.

Remote API changes
~~~~~~~~~~~~~~~~~~

The first Ganeti remote API (RAPI) was designed and deployed with the
Ganeti 1.2.5 release. That version provides read-only access to the
cluster state. A fully functional read-write API demands significant
internal changes, which will be implemented in version 2.0.

We decided to implement the Ganeti RAPI in a RESTful way, which is
aligned with the key features we are looking for: it is a simple,
stateless, scalable and extensible paradigm for API implementation. As
transport it uses HTTP over SSL, and we are implementing it with JSON
encoding, but in a way that makes it possible to extend it and provide
any other encoding.

Design
++++++

The Ganeti RAPI is implemented as an independent daemon, running on
the same node and with the same permission level as the Ganeti master
daemon. Communication is done through the LUXI library to the master
daemon. In order to keep communication asynchronous, RAPI processes
two types of client requests:

- queries: the server is able to answer immediately
- job submission: some time is required for a useful response

In the query case, the requested data is sent back to the client in
the HTTP response body. Typical examples of queries would be: list of
nodes, instances, cluster info, etc.

In the case of job submission, the client receives a job ID, an
identifier which allows it to query the job's progress in the job
queue (see `Job Queue`_).

Internally, each exported object has a version identifier, which is
used as a state identifier in the HTTP ETag header for
requests/responses, to avoid race conditions.

Resource representation
+++++++++++++++++++++++

The key difference of using REST instead of other API styles is that
REST requires separation of services via resources with unique URIs.
Each of them should hold a limited amount of state and support the
standard HTTP methods: GET, POST, DELETE, PUT.

For example, in Ganeti's case we can have a set of URIs:

 - ``/{clustername}/instances``
 - ``/{clustername}/instances/{instancename}``
 - ``/{clustername}/instances/{instancename}/tag``
 - ``/{clustername}/tag``

A GET request to ``/{clustername}/instances`` will return the list of
instances, a POST to ``/{clustername}/instances`` should create a new
instance, a DELETE of ``/{clustername}/instances/{instancename}``
should delete the instance, and a GET of ``/{clustername}/tag`` should
return the cluster tags.

Each resource URI will have a version prefix. The resource IDs are to
be determined.

Internal encoding might be JSON, XML, or any other format. The JSON
encoding fits the Ganeti RAPI needs nicely. The client can request a
specific representation via the Accept field in the HTTP header.

REST uses HTTP as its transport and application protocol for resource
access. The set of possible responses is a subset of standard HTTP
responses.

The statelessness model provides additional reliability and
transparency to operations (e.g. only one request needs to be analyzed
to understand the in-progress operation, not a sequence of multiple
requests/responses).

Security
++++++++

With the write functionality, security becomes a much bigger issue.
The Ganeti RAPI uses basic HTTP authentication on top of an
SSL-secured connection to grant access to an exported resource. The
password is stored locally in an Apache-style ``.htpasswd`` file. Only
one level of privileges is supported.

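To give a feeling for the intended usage, a client could query the
instance list roughly as follows (a sketch only: host name, port,
credentials and the final URI layout, including the version prefix,
are illustrative, and certificate handling is left out)::

  import json
  import urllib.request

  # Illustrative values; the real endpoint details are not final yet.
  BASE_URL = "https://cluster.example.com:5080"

  passwords = urllib.request.HTTPPasswordMgrWithDefaultRealm()
  passwords.add_password(None, BASE_URL, "rapi-user", "rapi-password")
  opener = urllib.request.build_opener(
      urllib.request.HTTPBasicAuthHandler(passwords))

  # A query: GET the instance list and decode the JSON response body.
  with opener.open(BASE_URL + "/mycluster/instances") as resp:
      instances = json.load(resp)
  print(instances)
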
Caveats
+++++++

The model detailed above for job submission requires the client to
poll periodically for updates to the job; an alternative would be to
allow the client to request a callback, or a 'wait for updates' call.

The callback model was not considered due to the following two issues:

- callbacks would require a new model of allowed callback URLs,
  together with a method of managing these
- callbacks only work when the client and the master are in the same
  security domain, and they fail in the other cases (e.g. when there is
  a firewall between the client and the RAPI daemon that only allows
  client-to-RAPI calls, which is usual in DMZ cases)

The 'wait for updates' method is not suited to the HTTP protocol,
where requests are supposed to be short-lived.

Command line changes
~~~~~~~~~~~~~~~~~~~~

Ganeti 2.0 introduces several new features as well as new ways to
handle instance resources like disks or network interfaces. This
requires some noticeable changes in the way command line arguments are
handled:

- extend and modify command line syntax to support new features
- ensure consistent patterns in command line arguments to reduce
  cognitive load

The design changes that require these changes are, in no particular
order:

- flexible instance disk handling: support a variable number of disks
  with varying properties per instance,
- flexible instance network interface handling: support a variable
  number of network interfaces with varying properties per instance,
- multiple hypervisors: multiple hypervisors can be active on the same
  cluster, each supporting different parameters,
- support for device type CDROM (via ISO image)

As such, there are several areas of Ganeti where the command line
arguments will change:

- Cluster configuration

  - cluster initialization
  - cluster default configuration

- Instance configuration

  - handling of network cards for instances,
  - handling of disks for instances,
  - handling of CDROM devices and
  - handling of hypervisor specific options.

Notes about device removal/addition
+++++++++++++++++++++++++++++++++++

To avoid problems with device location changes (e.g. second network
interface of the instance becoming the first or third and the like)
the list of network/disk devices is treated as a stack, i.e. devices
can only be added/removed at the end of the list of devices of each
class (disk or network) for each instance.

gnt-instance commands
+++++++++++++++++++++

The commands for gnt-instance will be modified and extended to allow
for the new functionality:

- the add command will be extended to support the new device and
  hypervisor options,
- the modify command continues to handle all modifications to
  instances, but will be extended with new arguments for handling
  devices.

Network Device Options
++++++++++++++++++++++

The generic format of the network device option is::

  --net $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:mac: MAC address of the network interface, accepts either a valid
  MAC address or the string 'auto'. If 'auto' is specified, a new MAC
  address will be generated randomly. If the mac device option is not
  specified, the default value 'auto' is assumed.
:bridge: network bridge the network interface is connected
  to. Accepts either a valid bridge name (the specified bridge must
  exist on the node(s)) as string or the string 'auto'. If 'auto' is
  specified, the default bridge is used. If the bridge option is not
  specified, the default value 'auto' is assumed.

Disk Device Options
+++++++++++++++++++

The generic format of the disk device option is::

  --disk $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:size: size of the disk device, either a positive number, specifying
  the disk size in mebibytes, or a number followed by a magnitude suffix
  (M for mebibytes, G for gibibytes). Also accepts the string 'auto' in
  which case the default disk size will be used. If the size option is
  not specified, 'auto' is assumed. This option is not valid for all
  disk layout types.
:access: access mode of the disk device, a single letter, valid values
  are:

  - *w*: read/write access to the disk device or
  - *r*: read-only access to the disk device.

  If the access mode is not specified, the default mode of read/write
  access will be configured.
:path: path to the image file for the disk device, string. No default
  exists. This option is not valid for all disk layout types.

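Example of an instance creation combining both option types
(illustrative only; the instance name, sizes and bridge are made up)::

  gnt-instance add --disk 0:size=10G --disk 1:size=2G,access=r --net 0:mac=auto,bridge=br0 test-instance
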
Adding devices
++++++++++++++

To add devices to an already existing instance, use the device type
specific option to gnt-instance modify. Currently, there are two
device type specific options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options, but instead of specifying a device number like for
gnt-instance add, you specify the magic string add. The new device
will always be appended at the end of the list of devices of this type
for the specified instance, e.g. if the instance has disk devices 0, 1
and 2, the newly added disk device will be disk device 3.

Example: gnt-instance modify --net add:mac=auto test-instance

Removing devices
++++++++++++++++

Removing devices from an instance is done via gnt-instance
modify. The same device specific options as for adding devices are
used. Instead of a device number and further device options, only the
magic string remove is specified. It will always remove the last
device in the list of devices of this type for the instance specified,
e.g. if the instance has disk devices 0, 1, 2 and 3, the disk device
number 3 will be removed.

Example: gnt-instance modify --net remove test-instance

Modifying devices
+++++++++++++++++

Modifying devices is also done with device type specific options to
the gnt-instance modify command. There are currently two device type
options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options. The device number you specify identifies the device to
be modified.

Example::

  gnt-instance modify --disk 2:access=r test-instance

Hypervisor Options
++++++++++++++++++

Ganeti 2.0 will support more than one hypervisor. Different
hypervisors have various options that only apply to a specific
hypervisor. Those hypervisor specific options are treated specially
via the ``--hypervisor`` option. The generic syntax of the hypervisor
option is as follows::

  --hypervisor $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor to use, string,
  has to match the supported hypervisors. Example: xen-pvm

:$OPTION: hypervisor option name, string
:$VALUE: hypervisor option value, string

The hypervisor option for an instance can be set at instance creation
time via the ``gnt-instance add`` command. If the hypervisor for an
instance is not specified upon instance creation, the default
hypervisor will be used.

Modifying hypervisor parameters
+++++++++++++++++++++++++++++++

The hypervisor parameters of an existing instance can be modified
using the ``--hypervisor`` option of the ``gnt-instance modify``
command. However, the hypervisor type of an existing instance cannot
be changed; only the particular hypervisor specific options can be
changed. Therefore, the format of the option parameters has been
simplified to omit the hypervisor name and only contain the
comma-separated list of option-value pairs.

Example::

  gnt-instance modify --hypervisor cdrom=/srv/boot.iso,boot_order=cdrom:network test-instance

gnt-cluster commands
++++++++++++++++++++

The commands for gnt-cluster will be extended to allow setting and
changing the default parameters of the cluster:

- The init command will be extended to support the --defaults option
  to set the cluster defaults upon cluster initialization.
- The modify command will be added to modify the cluster
  parameters. It will support the --defaults option to change the
  cluster defaults.

Cluster defaults
++++++++++++++++

The generic format of the cluster default setting option is::

  --defaults $OPTION=$VALUE[,$OPTION=$VALUE]

:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

Currently, the following cluster default options are defined (open to
further changes):

:hypervisor: the default hypervisor to use for new instances,
  string. Must be a valid hypervisor known to and supported by the
  cluster.
:disksize: the disk size for newly created instance disks, where
  applicable. Must be either a positive number, in which case the unit
  of megabyte is assumed, or a positive number followed by a supported
  magnitude symbol (M for megabyte or G for gigabyte).
:bridge: the default network bridge to use for newly created instance
  network interfaces, string. Must be a valid bridge name of a bridge
  existing on the node(s).

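Example (illustrative only; the cluster name, size and bridge are made
up)::

  gnt-cluster init --defaults hypervisor=xen-pvm,disksize=10G,bridge=br0 cluster.example.com
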
Hypervisor cluster defaults
+++++++++++++++++++++++++++

The generic format of the hypervisor cluster-wide default setting
option is::

  --hypervisor-defaults $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor whose defaults you want
  to set, string
:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

.. vim: set textwidth=72 :