=================
Ganeti 2.0 design
=================

This document describes the major changes in Ganeti 2.0 compared to
the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture,
paving the way for additional features in future 2.x versions.

.. contents:: :depth: 3

Objective
=========

Ganeti 1.2 has many scalability issues and restrictions due to its
roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and
then the restrictions.

Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of the
cluster administration and imposes a very rigid model. It has the
following main scalability issues:

- only one operation at a time on the cluster [#]_
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical
design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
       and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like start instance
while an OS installation is running.

Scalability problems
--------------------

Ganeti 1.2 has a single global lock, which is used for all cluster
operations.  This has been painful at various times, for example:

- It is impossible for two people to efficiently interact with a cluster
  (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for
  example failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance
size it's a lot more likely that operations which one could conceive
should run in parallel (for example because they happen on different
nodes) are actually stalling each other while waiting for the global
lock, without a real reason for that to happen.

One of the main causes of this global lock (besides the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so-called *cmd* lock and when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
instances per node, which is no longer true with today's hardware.

Artificial restrictions
-----------------------

Ganeti 1.2 (and previous versions) have a fixed two-disk, one-NIC per
instance model. This is a purely artificial restriction, but it
touches so many areas (configuration, import/export, command line)
that removing it is better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========

In order to solve the scalability problems, a rewrite of the core
design of Ganeti is required. While the cluster operations themselves
won't change (e.g. start instance will do the same things), the way
these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

.. image:: arch-2.0.png

This differs from the 1.2 architecture by the addition of the master
daemon, which will be the only entity to talk to the node daemons.


Detailed design
===============

The changes for 2.0 can be split into roughly three areas:

- core changes that affect the design of the software
- features (or restriction removals) which do not have a wide
  impact on the design
- user-level and API-level changes which translate into differences for
  the operation of the cluster

Core changes
------------

The main change will be switching from a per-process model to a
daemon-based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Besides these major changes, another 'core' change, though less
visible to the users, will be a new model of object attribute storage
that separates attributes into namespaces (such that a Xen PVM
instance will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(S) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.

Master daemon
~~~~~~~~~~~~~

In Ganeti 2.0, we will have the following *entities*:

- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so-called
  *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access the nodes via SSH (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case in which a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

UNIX sockets were chosen in order to remove the need for
authentication and authorisation inside Ganeti; for 2.0, the
permissions on the Unix socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
still internally implemented with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.

The job-related functions will be:

- submit job
- query job (which could also be categorized in the query functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an
  exception, the first element being the exception type and the second
  one the actual exception arguments; this will allow a simple method of
  passing Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between a hung master daemon and a job that simply has not
  changed

Users of the API that don't use the provided python library should
take care of the above two cases.
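
For readers implementing their own client, a minimal sketch of the
message framing could look like the following (written in Python 2
style to match the rest of the document; the socket path and the use of
the ``json``/``simplejson`` module are assumptions, and the provided
python library remains the recommended interface)::

  import json
  import socket

  ETX = chr(3)  # ASCII decimal 3, the message delimiter

  def call_luxi(method, args, address="/var/run/ganeti/master.sock"):
    """Send a single LUXI-style request and return the decoded result."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(address)
    try:
      # requests are a JSON dictionary with 'method' and 'args' fields
      sock.sendall(json.dumps({"method": method, "args": args}) + ETX)
      data = ""
      while not data.endswith(ETX):
        chunk = sock.recv(4096)
        if not chunk:
          raise RuntimeError("Connection closed before end of message")
        data += chunk
    finally:
      sock.close()
    response = json.loads(data[:-1])
    if not response["success"]:
      # the two-element [type, args] form should be re-raised as an exception
      raise RuntimeError(response["result"])
    return response["result"]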

Master daemon implementation
++++++++++++++++++++++++++++

The daemon will be based around a main I/O thread that will wait for
new requests from the clients, and that does the setup/shutdown of the
other thread pools.

There will be two other classes of threads in the daemon:

- job processing threads, part of a thread pool, and which are
  long-lived, started at daemon startup and terminated only at shutdown
  time
- client I/O threads, which are the ones that talk the local protocol
  (LUXI) to the clients, and are short-lived

Master startup/failover
+++++++++++++++++++++++

In Ganeti 1.x there is no protection against failing over the master
to a node with stale configuration. In effect, the responsibility of
correct failovers falls on the admin. This is true both for the new
master and for when an old, offline master starts up.

Since in 2.x we are extending the cluster state to cover the job queue
and have a daemon that will execute the job queue by itself, we want
to have more resilience for the master role.

The following algorithm will run whenever a node is ready to
transition to the master role, either at startup time or at node
failover:

#. read the configuration file and parse the node list
   contained within

#. query all the nodes and make sure we obtain an agreement via
   a quorum of at least half plus one nodes for the following:

    - we have the latest configuration and job list (as
      determined by the serial number on the configuration and
      highest job ID on the job queue)

    - if we are not failing over (but just starting), the
      quorum agrees that we are the designated master

    - if any of the above is false, we prevent the current operation
      (i.e. we don't become the master)

#. at this point, the node transitions to the master role

#. for all the in-progress jobs, mark them as failed, with
   reason unknown or something similar (master failed, etc.)
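
A minimal sketch of the quorum check in step two could look as follows;
the per-node query is passed in as a function because its exact
interface is not defined here (the names below are illustrative
assumptions, not the final code)::

  def ConfirmMasterRole(my_name, my_serial, my_job_id, node_list,
                        query_fn, failover=False):
    """Return True if at least half plus one nodes confirm our data.

    query_fn(node) is assumed to return a (config_serial,
    highest_job_id, believed_master) tuple for one node, or None if the
    node cannot be contacted.
    """
    quorum = len(node_list) // 2 + 1
    votes = 0
    for node in node_list:
      state = query_fn(node)
      if state is None:
        continue
      serial, job_id, master = state
      # no node may have newer configuration or job data than we do
      data_ok = serial <= my_serial and job_id <= my_job_id
      # at plain startup the quorum must also agree we are the master
      master_ok = failover or master == my_name
      if data_ok and master_ok:
        votes += 1
    return votes >= quorum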

Since due to exceptional conditions we could have a situation in which
no node can become the master due to inconsistent data, we will have
an override switch for the master daemon startup that will assume the
current node has the right data and will replicate all the
configuration files to the other nodes.

**Note**: the above algorithm is by no means an election algorithm; it
is a *confirmation* of the master role currently held by a node.

Logging
+++++++

The logging system will be switched completely to the standard python
logging module; currently it's logging-based, but exposes a different
API, which is just overhead. As such, the code will be switched over
to standard logging calls, and only the setup will be custom.

With this change, we will remove the separate debug/info/error logs,
and instead always have one log file per daemon:

- master-daemon.log for the master daemon
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be
  in the standard HTTP log format for possible parsing by other tools

Since the :term:`watcher` will only submit jobs to the master for
startup of the instances, its log file will contain less information
than before, mainly that it will start the instance, but not the
results.

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
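
As a rough illustration of this model (not the actual noded code), the
request loop could be structured like this, with the accept and handler
functions supplied by the caller::

  import os

  def serve_forever(accept_request, handle_request):
    """Fork one child per incoming RPC request (illustrative only)."""
    while True:
      request = accept_request()      # blocks until the next request
      if os.fork() == 0:
        # child: do the actual work and exit without running cleanup code
        try:
          handle_request(request)
        finally:
          os._exit(0)
      # parent: reap any children that have finished, without blocking
      try:
        while os.waitpid(-1, os.WNOHANG)[0] != 0:
          pass
      except OSError:                 # no children left to wait for
        pass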

Caveats
+++++++

A discussed alternative is to keep the current individual processes
touching the cluster configuration model. The reasons we have not
chosen this approach are:

- the cost of reading and unserializing the cluster state
  today is not small enough that we can ignore it; the addition of
  the job queue will make the startup cost even higher. While this
  runtime cost seems low, it can be on the order of a few seconds on
  bigger clusters, which for very quick commands is comparable to
  the actual duration of the computation itself

- individual commands would make it harder to implement a
  fire-and-forget job request, along the lines of "start this
  instance but do not wait for it to finish"; it would require a
  model of backgrounding the operation and other things that are
  much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use Twisted, all the code needs to be 'twisted-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (the Twisted name for the
  main loop)
- the more advanced granular locking that we want to implement would
  require, if written in the async manner, deep integration with the
  Twisted stack, to such an extent that business logic is inseparable
  from the protocol coding; we felt that this is an unreasonable
  requirement, and that a good protocol library should allow complete
  separation of low-level protocol calls and business logic; by
  comparison, the threaded approach combined with the HTTP(S) protocol
  required (for the first iteration) absolutely no changes from the 1.2
  code, and later changes for optimizing the inter-node RPC calls
  required just syntactic changes (e.g. ``rpc.call_...`` to
  ``self.rpc.call_...``)

Another issue is Twisted API stability: during the Ganeti 1.x
lifetime, we had to implement workarounds many times for changes in
the Twisted version, so that for example 1.2 is able to use both
Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.


Granular locking
~~~~~~~~~~~~~~~~

We want to make sure that multiple operations can run in parallel on a
Ganeti Cluster. In order for this to happen we need to make sure
concurrently running operations don't step on each other's toes and
break the cluster.

This design addresses how we are going to deal with locking so that:

- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a non-goal. We have
identified a set of operations that are currently bottlenecks and need
to be parallelised and have worked on those. In the future it will be
possible to address other needs, thus making the cluster more and more
parallel one step at a time.

This section only talks about parallelising Ganeti level operations, aka
Logical Units, and the locking needed for that. Any other
synchronization lock needed internally by the code is outside its scope.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation
  transparent to their users
- automatically grabbing multiple locks in the right order (avoiding
  deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it is up to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same
time.  Any attempt to grab a lock while already holding one in the wrong
order will be checked for, and fail.


The Locks
+++++++++

At the first stage we have decided to provide the following locks:

- One "config file" lock
- One lock per node in the cluster
- One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and
the node locks before the config lock. Locks will need to be acquired at
the same time for multiple instances and nodes, and the internal ordering
will be dealt with within the locking library, which, for simplicity,
will just use alphabetical order.
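
To illustrate the intended ordering, a sketch of how a multi-lock
acquisition could be normalised follows (assuming ``SharedLock``-like
objects with an ``acquire(shared=...)`` method; the real library
interface may differ)::

  def acquire_set(instance_locks, node_locks, config_lock, shared=False):
    """Acquire a set of locks in the documented global order.

    Instance locks come first, then node locks, then the config lock;
    within each level the names are sorted alphabetically so that two
    concurrent callers always acquire them in the same order.
    """
    acquired = []
    try:
      for name in sorted(instance_locks):
        instance_locks[name].acquire(shared=shared)
        acquired.append(instance_locks[name])
      for name in sorted(node_locks):
        node_locks[name].acquire(shared=shared)
        acquired.append(node_locks[name])
      config_lock.acquire(shared=shared)
      acquired.append(config_lock)
    except Exception:
      # release in reverse order if anything went wrong
      for lock in reversed(acquired):
        lock.release()
      raise
    return acquired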
471

    
472
Each lock has the following three possible statuses:
473

    
474
- unlocked (anyone can grab the lock)
475
- shared (anyone can grab/have the lock but only in shared mode)
476
- exclusive (no one else can grab/have the lock)
477

    
478
Handling conversion to more granularity
479
+++++++++++++++++++++++++++++++++++++++
480

    
481
In order to convert to a more granular approach transparently each time
482
we split a lock into more we'll create a "metalock", which will depend
483
on those sub-locks and live for the time necessary for all the code to
484
convert (or forever, in some conditions). When a metalock exists all
485
converted code must acquire it in shared mode, so it can run
486
concurrently, but still be exclusive with old code, which acquires it
487
exclusively.
488

    
489
In the beginning the only such lock will be what replaces the current
490
"command" lock, and will acquire all the locks in the system, before
491
proceeding. This lock will be called the "Big Ganeti Lock" because
492
holding that one will avoid any other concurrent Ganeti operations.
493

    
494
We might also want to devise more metalocks (eg. all nodes, all
495
nodes+config) in order to make it easier for some parts of the code to
496
acquire what it needs without specifying it explicitly.
497

    
498
In the future things like the node locks could become metalocks, should
499
we decide to split them into an even more fine grained approach, but
500
this will probably be only after the first 2.0 version has been
501
released.
502

    
503
Adding/Removing locks
504
+++++++++++++++++++++
505

    
506
When a new instance or a new node is created an associated lock must be
507
added to the list. The relevant code will need to inform the locking
508
library of such a change.
509

    
510
This needs to be compatible with every other lock in the system,
511
especially metalocks that guarantee to grab sets of resources without
512
specifying them explicitly. The implementation of this will be handled
513
in the locking library itself.
514

    
515
When instances or nodes disappear from the cluster the relevant locks
516
must be removed. This is easier than adding new elements, as the code
517
which removes them must own them exclusively already, and thus deals
518
with metalocks exactly as normal code acquiring those locks. Any
519
operation queuing on a removed lock will fail after its removal.
520

    
521
Asynchronous operations
522
+++++++++++++++++++++++
523

    
524
For the first version the locking library will only export synchronous
525
operations, which will block till the needed lock are held, and only
526
fail if the request is impossible or somehow erroneous.
527

    
528
In the future we may want to implement different types of asynchronous
529
operations such as:
530

    
531
- try to acquire this lock set and fail if not possible
532
- try to acquire one of these lock sets and return the first one you
533
  were able to get (or after a timeout) (select/poll like)
534

    
535
These operations can be used to prioritize operations based on available
536
locks, rather than making them just blindly queue for acquiring them.
537
The inherent risk, though, is that any code using the first operation,
538
or setting a timeout for the second one, is susceptible to starvation
539
and thus may never be able to get the required locks and complete
540
certain tasks. Considering this providing/using these operations should
541
not be among our first priorities.
542

    
543
Locking granularity
544
+++++++++++++++++++
545

    
546
For the first version of this code we'll convert each Logical Unit to
547
acquire/release the locks it needs, so locking will be at the Logical
548
Unit level.  In the future we may want to split logical units in
549
independent "tasklets" with their own locking requirements. A different
550
design doc (or mini design doc) will cover the move from Logical Units
551
to tasklets.
552

    
553
Code examples
554
+++++++++++++
555

    
556
In general when acquiring locks we should use a code path equivalent
557
to::
558

    
559
  lock.acquire()
560
  try:
561
    ...
562
    # other code
563
  finally:
564
    lock.release()
565

    
566
This makes sure we release all locks, and avoid possible deadlocks. Of
567
course extra care must be used not to leave, if possible locked
568
structures in an unusable state. Note that with Python 2.5 a simpler
569
syntax will be possible, but we want to keep compatibility with Python
570
2.4 so the new constructs should not be used.
571

    
572
In order to avoid this extra indentation and code changes everywhere in
573
the Logical Units code, we decided to allow LUs to declare locks, and
574
then execute their code with their locks acquired. In the new world LUs
575
are called like this::
576

    
577
  # user passed names are expanded to the internal lock/resource name,
578
  # then known needed locks are declared
579
  lu.ExpandNames()
580
  ... some locking/adding of locks may happen ...
581
  # late declaration of locks for one level: this is useful because sometimes
582
  # we can't know which resource we need before locking the previous level
583
  lu.DeclareLocks() # for each level (cluster, instance, node)
584
  ... more locking/adding of locks can happen ...
585
  # these functions are called with the proper locks held
586
  lu.CheckPrereq()
587
  lu.Exec()
588
  ... locks declared for removal are removed, all acquired locks released ...
589

    
590
The Processor and the LogicalUnit class will contain exact documentation
591
on how locks are supposed to be declared.
592

    
593
Caveats
594
+++++++
595

    
596
This library will provide an easy upgrade path to bring all the code to
597
granular locking without breaking everything, and it will also guarantee
598
against a lot of common errors. Code switching from the old "lock
599
everything" lock to the new system, though, needs to be carefully
600
scrutinised to be sure it is really acquiring all the necessary locks,
601
and none has been overlooked or forgotten.
602

    
603
The code can contain other locks outside of this library, to synchronise
604
other threaded code (eg for the job queue) but in general these should
605
be leaf locks or carefully structured non-leaf ones, to avoid deadlock
606
race conditions.
607

    
608

    
609
Job Queue
610
~~~~~~~~~
611

    
612
Granular locking is not enough to speed up operations, we also need a
613
queue to store these and to be able to process as many as possible in
614
parallel.
615

    
616
A Ganeti job will consist of multiple ``OpCodes`` which are the basic
617
element of operation in Ganeti 1.2 (and will remain as such). Most
618
command-level commands are equivalent to one OpCode, or in some cases
619
to a sequence of opcodes, all of the same type (e.g. evacuating a node
620
will generate N opcodes of type replace disks).
621

    
622

    
623
Job execution—“Life of a Ganeti job”
624
++++++++++++++++++++++++++++++++++++
625

    
626
#. Job gets submitted by the client. A new job identifier is generated
627
   and assigned to the job. The job is then automatically replicated
628
   [#replic]_ to all nodes in the cluster. The identifier is returned to
629
   the client.
630
#. A pool of worker threads waits for new jobs. If all are busy, the job
631
   has to wait and the first worker finishing its work will grab it.
632
   Otherwise any of the waiting threads will pick up the new job.
633
#. Client waits for job status updates by calling a waiting RPC
634
   function. Log message may be shown to the user. Until the job is
635
   started, it can also be canceled.
636
#. As soon as the job is finished, its final result and status can be
637
   retrieved from the server.
638
#. If the client archives the job, it gets moved to a history directory.
639
   There will be a method to archive all jobs older than a a given age.
640

    
641
.. [#replic] We need replication in order to maintain the consistency
642
   across all nodes in the system; the master node only differs in the
643
   fact that now it is running the master daemon, but it if fails and we
644
   do a master failover, the jobs are still visible on the new master
645
   (though marked as failed).
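
The worker pool in step two can be pictured with a minimal sketch like
the following (illustrative only; the real implementation also has to
persist and replicate the jobs as described below)::

  import threading
  import Queue  # named 'queue' in Python 3

  def _worker(jobs, run_job):
    """One long-lived worker thread: pick up jobs and execute them."""
    while True:
      job = jobs.get()      # blocks while no job is available
      if job is None:       # sentinel used at daemon shutdown
        break
      run_job(job)

  def start_pool(run_job, count=4):
    """Start the worker threads and return the shared job queue."""
    jobs = Queue.Queue()
    for _ in range(count):
      t = threading.Thread(target=_worker, args=(jobs, run_job))
      t.setDaemon(True)
      t.start()
    return jobs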

Failures to replicate a job to other nodes will only be flagged as
errors in the master daemon log if more than half of the nodes failed;
otherwise we ignore the failure, and rely on the fact that the next
update (for still running jobs) will retry the update. For finished
jobs, it is less of a problem.

Future improvements will look into checking the consistency of the job
list and jobs themselves at master daemon startup.


Job storage
+++++++++++

Jobs are stored in the filesystem as individual files, serialized
using JSON (the standard serialization mechanism in Ganeti).

The choice of storing each job in its own file was made because:

- a file can be atomically replaced
- a file can easily be replicated to other nodes
- checking consistency across nodes can be implemented very easily,
  since all job files should be (at a given moment in time) identical

The other possible choices that were discussed and discounted were:

- single big file with all job data: not feasible due to difficult
  updates
- in-process databases: hard to replicate the entire database to the
  other nodes, and replicating individual operations does not mean we
  keep consistency


Queue structure
+++++++++++++++

All file operations have to be done atomically by writing to a temporary
file and subsequently renaming it. Except for log messages, every change
in a job is stored and replicated to other nodes.

::

  /var/lib/ganeti/queue/
    job-1 (JSON encoded job description and status)
    […]
    job-37
    job-38
    job-39
    lock (Queue managing process opens this file in exclusive mode)
    serial (Last job ID used)
    version (Queue format version)
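
A minimal sketch of such an atomic update (the function name is
illustrative; the real code also has to replicate the file to the other
nodes as described above)::

  import os
  import tempfile

  def write_queue_file(path, data):
    """Atomically replace a job queue file with new content."""
    dirname = os.path.dirname(path)
    fd, tmpname = tempfile.mkstemp(dir=dirname)
    try:
      os.write(fd, data)
      os.fsync(fd)
    finally:
      os.close(fd)
    # rename() is atomic on POSIX filesystems, so readers see either
    # the old or the new file, never a partially written one
    os.rename(tmpname, path)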

Locking
+++++++

Locking in the job queue is a complicated topic. It is called from more
than one thread and must be thread-safe. For simplicity, a single lock
is used for the whole job queue.

A more detailed description can be found in doc/locking.rst.


Internal RPC
++++++++++++

RPC calls available between the Ganeti master and node daemons:

jobqueue_update(file_name, content)
  Writes a file in the job queue directory.
jobqueue_purge()
  Cleans the job queue directory completely, including archived jobs.
jobqueue_rename(old, new)
  Renames a file in the job queue directory.


Client RPC
++++++++++

RPC between Ganeti clients and the Ganeti master daemon supports the
following operations:

SubmitJob(ops)
  Submits a list of opcodes and returns the job identifier. The
  identifier is guaranteed to be unique during the lifetime of a
  cluster.
WaitForJobChange(job_id, fields, […], timeout)
  This function waits until a job changes or a timeout expires. The
  condition for when a job changed is defined by the fields passed and
  the last log message received.
QueryJobs(job_ids, fields)
  Returns field values for the job identifiers passed.
CancelJob(job_id)
  Cancels the job specified by identifier. This operation may fail if
  the job is already running, canceled or finished.
ArchiveJob(job_id)
  Moves a job into the …/archive/ directory. This operation will fail if
  the job has not been canceled or finished.


Job and opcode status
+++++++++++++++++++++

Each job and each opcode has, at any time, one of the following states:

Queued
  The job/opcode was submitted, but did not yet start.
Waiting
  The job/opcode is waiting for a lock to proceed.
Running
  The job/opcode is running.
Canceled
  The job/opcode was canceled before it started.
Success
  The job/opcode ran and finished successfully.
Error
  The job/opcode was aborted with an error.

If the master is aborted while a job is running, the job will be set to
the Error status once the master starts again.


History
+++++++

Archived jobs are kept in a separate directory,
``/var/lib/ganeti/queue/archive/``.  This is done in order to speed up
the queue handling: by default, the jobs in the archive are not
touched by any functions. Only the current (unarchived) jobs are
parsed, loaded, and verified (if implemented) by the master daemon.


Ganeti updates
++++++++++++++

The queue has to be completely empty for Ganeti updates with changes
in the job queue structure. In order to allow this, there will be a
way to prevent new jobs from entering the queue.


Object parameters
~~~~~~~~~~~~~~~~~

Across all cluster configuration data, we have multiple classes of
parameters:

A. cluster-wide parameters (e.g. name of the cluster, the master);
   these are the ones that we have today, and are unchanged from the
   current model

#. node parameters

#. instance specific parameters, e.g. the name of disks (LV), that
   cannot be shared with other instances

#. instance parameters, that are or can be the same for many
   instances, but are not hypervisor related; e.g. the number of VCPUs,
   or the size of memory

#. instance parameters that are hypervisor specific (e.g. kernel_path
   or PAE mode)


The following definitions for instance parameters will be used below:

:hypervisor parameter:
  a hypervisor parameter (or hypervisor specific parameter) is defined
  as a parameter that is interpreted by the hypervisor support code in
  Ganeti and usually is specific to a particular hypervisor (like the
  kernel path for :term:`PVM` which makes no sense for :term:`HVM`).

:backend parameter:
  a backend parameter is defined as an instance parameter that can be
  shared among a list of instances, and is either generic enough not
  to be tied to a given hypervisor or cannot influence at all the
  hypervisor behaviour.

  For example: memory, vcpus, auto_balance

  All these parameters will be encoded into constants.py with the prefix
  "BE\_" and the whole list of parameters will exist in the set
  "BES_PARAMETERS"

:proper parameter:
  a parameter whose value is unique to the instance (e.g. the name of a
  LV, or the MAC of a NIC)

As a general rule, for all kinds of parameters, "None" (or in
JSON-speak, "nil") will no longer be a valid value for a parameter. As
such, only non-default parameters will be saved as part of objects in
the serialization step, reducing the size of the serialized format.

Cluster parameters
++++++++++++++++++

Cluster parameters remain as today, attributes at the top level of the
Cluster object. In addition, two new attributes at this level will
hold defaults for the instances:

- hvparams, a dictionary indexed by hypervisor type, holding default
  values for hypervisor parameters that are not defined/overridden by
  the instances of this hypervisor type

- beparams, a dictionary holding (for 2.0) a single element 'default',
  which holds the default values for backend parameters

Node parameters
+++++++++++++++

Node-related parameters are very few, and we will continue using the
same model for these as previously (attributes on the Node object).

There are three new node flags, described in a separate section "node
flags" below.

Instance parameters
+++++++++++++++++++

As described before, the instance parameters are split in three:
instance proper parameters, unique to each instance, instance
hypervisor parameters and instance backend parameters.

The "hvparams" and "beparams" are kept in two dictionaries at instance
level. Only non-default parameters are stored (but once customized, a
parameter will be kept, even with the same value as the default one,
until reset).

The names for hypervisor parameters in the instance.hvparams subtree
should be chosen to be as generic as possible, especially if specific
parameters could conceivably be useful for more than one hypervisor,
e.g. ``instance.hvparams.vnc_console_port`` instead of using both
``instance.hvparams.hvm_vnc_console_port`` and
``instance.hvparams.kvm_vnc_console_port``.

There are some special cases related to disks and NICs (for example):
a disk has both Ganeti-related parameters (e.g. the name of the LV)
and hypervisor-related parameters (how the disk is presented to/named
in the instance). The former parameters remain as proper instance
parameters, while the latter values are migrated to the hvparams
structure. In 2.0, such hypervisor parameters will exist only globally
per instance, and not per disk (e.g. all NICs will be exported as
being of the same type).

Starting from the 1.2 list of instance parameters, here is how they
will be mapped to the three classes of parameters:

- name (P)
- primary_node (P)
- os (P)
- hypervisor (P)
- status (P)
- memory (BE)
- vcpus (BE)
- nics (P)
- disks (P)
- disk_template (P)
- network_port (P)
- kernel_path (HV)
- initrd_path (HV)
- hvm_boot_order (HV)
- hvm_acpi (HV)
- hvm_pae (HV)
- hvm_cdrom_image_path (HV)
- hvm_nic_type (HV)
- hvm_disk_type (HV)
- vnc_bind_address (HV)
- serial_no (P)


Parameter validation
++++++++++++++++++++

To support the new cluster parameter design, additional features will
be required from the hypervisor support implementations in Ganeti.

The hypervisor support implementation API will be extended with the
following features:

:PARAMETERS: class-level attribute holding the list of valid parameters
  for this hypervisor
:CheckParamSyntax(hvparams): checks that the given parameters are
  valid (as in the names are valid) for this hypervisor; usually just
  comparing ``hvparams.keys()`` and ``cls.PARAMETERS``; this is a class
  method that can be called from within master code (i.e. cmdlib) and
  should be safe to do so
:ValidateParameters(hvparams): verifies the values of the provided
  parameters against this hypervisor; this is a method that will be
  called on the target node, from backend.py code, and as such can
  make node-specific checks (e.g. kernel_path checking)
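
A minimal sketch of what a hypervisor support class could look like
under this extended API (the class name and the parameters used are
illustrative, not the final code)::

  import os

  class ExampleHypervisor(object):
    """Example hypervisor support class implementing the extended API."""

    PARAMETERS = ["kernel_path", "initrd_path"]

    @classmethod
    def CheckParamSyntax(cls, hvparams):
      """Name-level check; safe to run on the master (cmdlib)."""
      invalid = [name for name in hvparams if name not in cls.PARAMETERS]
      if invalid:
        raise ValueError("Unknown hypervisor parameters: %s" %
                         ", ".join(invalid))

    def ValidateParameters(self, hvparams):
      """Value-level check; runs on the target node (backend code)."""
      kernel = hvparams.get("kernel_path")
      if kernel and not os.path.isfile(kernel):
        raise ValueError("Kernel %s not found on this node" % kernel)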

Default value application
+++++++++++++++++++++++++

The application of defaults to an instance is done in the Cluster
object, via two new methods as follows:

- ``Cluster.FillHV(instance)``, returns 'filled' hvparams dict, based on
  instance's hvparams and cluster's ``hvparams[instance.hypervisor]``

- ``Cluster.FillBE(instance, be_type="default")``, which returns the
  beparams dict, based on the instance and cluster beparams

The FillHV/BE transformations will be used, for example, in the
RpcRunner when sending an instance for activation/stop, and the sent
instance hvparams/beparams will have the final value (noded code doesn't
know about defaults).

LU code will need to self-call the transformation, if needed.
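
Conceptually the fill operation is just a dictionary merge with the
instance values taking precedence; a rough sketch (not the actual
Cluster code)::

  def FillHV(cluster_hvparams, instance):
    """Return instance.hvparams with the cluster defaults filled in."""
    filled = dict(cluster_hvparams.get(instance.hypervisor, {}))
    filled.update(instance.hvparams)  # instance values override defaults
    return filled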

Opcode changes
++++++++++++++

The parameter changes will have an impact on the OpCodes, especially on
the following ones:

- ``OpInstanceCreate``, where the new hv and be parameters will be sent
  as dictionaries; note that all hv and be parameters are now optional,
  as the values can instead be taken from the cluster
- ``OpInstanceQuery``, where we have to be able to query these new
  parameters; the syntax for names will be ``hvparam/$NAME`` and
  ``beparam/$NAME`` for querying an individual parameter out of one
  dictionary, and ``hvparams``, respectively ``beparams``, for the whole
  dictionaries
- ``OpModifyInstance``, where the modified parameters are sent as
  dictionaries

Additionally, we will need new OpCodes to modify the cluster-level
defaults for the be/hv sets of parameters.

Caveats
+++++++

One problem that might appear is that our classification is not
complete or not good enough, and we'll need to change this model. As
a last resort, we will need to roll back and keep the 1.2 style.

Another problem is that the classification of some parameters is
unclear (e.g. ``network_port``, is this BE or HV?); in this case we'll
take the risk of having to move parameters later between classes.

Security
++++++++

The only security issue that we foresee is if some new parameters will
have sensitive values. If so, we will need to have a way to export the
config data while purging the sensitive values.

E.g. for the DRBD shared secrets, we could export these with the
values replaced by an empty string.

Node flags
~~~~~~~~~~

Ganeti 2.0 adds three node flags that change the way nodes are handled
within Ganeti and the related infrastructure (iallocator interaction,
RAPI data export).

*master candidate* flag
+++++++++++++++++++++++

Ganeti 2.0 allows more scalability in operation by introducing
parallelization. However, a new bottleneck appears: the
synchronization and replication of the cluster configuration to all
nodes in the cluster.

This breaks scalability, as the speed of the replication decreases
roughly with the number of nodes in the cluster. The goal of the
master candidate flag is to change this O(n) into O(1) with respect to
job and configuration data propagation.

Only nodes having this flag set (let's call this set of nodes the
*candidate pool*) will have jobs and configuration data replicated.

The cluster will have a new parameter (runtime changeable) called
``candidate_pool_size`` which represents the number of candidates the
cluster tries to maintain (preferably automatically).

This will impact the cluster operations as follows:

- jobs and config data will be replicated only to a fixed set of nodes
- master fail-over will only be possible to a node in the candidate pool
- cluster verify needs changing to account for these two roles
- external scripts will no longer have access to the configuration
  file (this is not recommended anyway)


The caveats of this change are:

- if all candidates are lost (completely), the cluster configuration is
  lost (but it should be backed up external to the cluster anyway)

- failed nodes which are candidates must be dealt with properly, so
  that we don't lose too many candidates at the same time; this will be
  reported in cluster verify

- the 'all equal' concept of Ganeti is no longer true

- the partial distribution of config data means that all nodes will
  have to revert to ssconf files for master info (as in 1.2)

Advantages:

- speed on a 100+ node simulated cluster is greatly enhanced, even
  for a simple operation; ``gnt-instance remove`` on a diskless instance
  goes from ~9 seconds to ~2 seconds

- node failure of non-candidates will have less impact on the cluster

The default value for the candidate pool size will be set to 10, but
this can be changed at cluster creation and modified any time later.

Testing on simulated big clusters with sequential and parallel jobs
shows that this value (10) is a sweet spot from a performance and load
point of view.

*offline* flag
++++++++++++++

In order to better support the situation in which nodes are offline
(e.g. for repair) without altering the cluster configuration, Ganeti
needs to be told and needs to properly handle this state for nodes.

This will result in simpler procedures, and fewer mistakes, when the
amount of node failures is high on an absolute scale (either due to a
high failure rate or simply big clusters).

Nodes having this attribute set will not be contacted for inter-node
RPC calls, will not be master candidates, and will not be able to host
instances as primaries.

Setting this attribute on a node:

- will not be allowed if the node is the master
- will not be allowed if the node has primary instances
- will cause the node to be demoted from the master candidate role (if
  it was), possibly causing another node to be promoted to that role

This attribute will impact the cluster operations as follows:

- querying these nodes for anything will fail instantly in the RPC
  library, with a specific RPC error (RpcResult.offline == True)

- they will be listed in the Other section of cluster verify

The code is changed in the following ways:

- RPC calls were converted to skip such nodes:

  - RpcRunner-instance-based RPC calls are easy to convert

  - static/classmethod RPC calls are harder to convert, and were left
    alone

- the RPC results were unified so that this new result state (offline)
  can be differentiated

- master voting still queries in-repair nodes, as we need to ensure
  consistency in case the (wrong) masters have old data, and nodes have
  come back from repairs

Caveats:

- some operation semantics are less clear (e.g. what to do on instance
  start with an offline secondary?); for now, these will just fail as if
  the flag is not set (but faster)
- a 2-node cluster with one node offline needs manual startup of the
  master with a special flag to skip voting (as the master can't get a
  quorum there)

One of the advantages of implementing this flag is that it will allow
future automation tools to automatically put the node into repairs and
recover from this state, and the code (should/will) handle this much
better than just timing out. So, future possible improvements (for
later versions):

- the watcher will detect nodes which fail RPC calls, will attempt to
  ssh to them, and on failure will put them offline
- the watcher will try to ssh to and query the offline nodes, and if
  successful will take them off the repair list

Alternatives considered: the RPC call model in 2.0 is, by default,
much nicer - errors are logged in the background, and job/opcode
execution is clearer, so we could simply not introduce this. However,
having this state will make both the codepaths clearer (offline
vs. temporary failure) and the operational model (it's not a node with
errors, but an offline node).


*drained* flag
++++++++++++++

Due to the parallel execution of jobs in Ganeti 2.0, we could have the
following situation:

- gnt-node migrate + failover is run
- gnt-node evacuate is run, which schedules a long-running 6-opcode
  job for the node
- partway through, a new job comes in that runs an iallocator script,
  which finds the above node empty and a very good candidate
- gnt-node evacuate has finished, but now it has to be run again, to
  clean up the above instance(s)

In order to prevent this situation, and to be able to get nodes into
proper offline status easily, a new *drained* flag was added to the
nodes.

This flag (which actually means "is being, or was, drained, and is
expected to go offline") will prevent allocations on the node, but
otherwise all other operations (start/stop instance, query, etc.) are
working without any restrictions.

Interaction between flags
+++++++++++++++++++++++++

While these flags are implemented as separate flags, they are
mutually exclusive and act together with the master node role
as a single *node status* value. In other words, a node is only in one
of these roles at a given time. The lack of any of these flags denotes
a regular node.

The current node status is visible in the ``gnt-cluster verify``
output, and the individual flags can be examined via separate flags in
the ``gnt-node list`` output.

These new flags will be exported in both the iallocator input message
and via RAPI; see the respective man pages for the exact names.

Feature changes
---------------

The main feature-level changes will be:

- a number of disk related changes
- removal of the fixed two-disk, one-NIC per instance limitation

Disk handling changes
~~~~~~~~~~~~~~~~~~~~~

The storage options available in Ganeti 1.x were introduced based on
then-current software (first DRBD 0.7, then later DRBD 8) and the
estimated usage patterns. However, experience has later shown that some
assumptions made initially are not true and that more flexibility is
needed.

One main assumption made was that disk failures should be treated as
'rare' events, and that each of them needs to be manually handled in
order to ensure data safety; however, both these assumptions are false:

- disk failures can be a common occurrence, based on usage patterns or
  cluster size
- our disk setup is robust enough (referring to DRBD8 + LVM) that we
  could automate more of the recovery

Note that we still don't have fully-automated disk recovery as a goal,
but our goal is to reduce the manual work needed.

As such, we plan the following main changes:

- DRBD8 is much more flexible and stable than its previous version
  (0.7), such that removing the support for the ``remote_raid1``
  template and focusing only on DRBD8 is easier

- dynamic discovery of DRBD devices is not actually needed in a cluster
  where the DRBD namespace is controlled by Ganeti; switching to a
  static assignment (done at either instance creation time or change
  secondary time) will change the disk activation time from O(n) to
  O(1), which on big clusters is a significant gain

- remove the hard dependency on LVM (currently all available storage
  types are ultimately backed by LVM volumes) by introducing file-based
  storage

Additionally, a number of smaller enhancements are also planned:

- support a variable number of disks
- support read-only disks

Future enhancements in the 2.x series, which do not require base design
changes, might include:

- enhancement of the LVM allocation method in order to try to keep
  all of an instance's virtual disks on the same physical
  disks

- add support for DRBD8 authentication at handshake time in
  order to ensure each device connects to the correct peer

- remove the restriction of failing over only to the secondary,
  which creates very strict rules on cluster allocation

DRBD minor allocation
+++++++++++++++++++++

Currently, when trying to identify or activate a new DRBD (or MD)
device, the code scans all in-use devices in order to see if we find
one that looks similar to our parameters and is already in the desired
state or not. Since this needs external commands to be run, it is very
slow when more than a few devices are already present.

Therefore, we will change the discovery model from dynamic to
static. When a new device is logically created (added to the
configuration) a free minor number is computed from the list of
devices that should exist on that node and assigned to that
device.

At device activation, if the minor is already in use, we check if
it has our parameters; if not, we just destroy the device (if
possible, otherwise we abort) and start it with our own
parameters.

This means that we in effect take ownership of the minor space for
that device type; if there's a user-created DRBD minor, it will be
automatically removed.

The change will have the effect of reducing the number of external
commands run per device from a constant number times the index of the
first free DRBD minor to just a constant number.
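
The static allocation itself is trivial once the configuration is the
single source of truth; a sketch of the minor computation (illustrative
only)::

  def find_free_minor(configured_minors):
    """Return the smallest DRBD minor not used by any configured device."""
    used = set(configured_minors)
    minor = 0
    while minor in used:
      minor += 1
    return minor

For example, with minors ``[0, 1, 3]`` already assigned on a node, the
next device would get minor 2, without running any external command.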
1262

    
1263
Removal of obsolete device types (MD, DRBD7)
1264
++++++++++++++++++++++++++++++++++++++++++++
1265

    
1266
We need to remove these device types because of two issues. First,
1267
DRBD7 has bad failure modes in case of dual failures (both network and
1268
disk - it cannot propagate the error up the device stack and instead
1269
just panics. Second, due to the asymmetry between primary and
1270
secondary in MD+DRBD mode, we cannot do live failover (not even if we
1271
had MD+DRBD8).
1272

    
1273
File-based storage support
1274
++++++++++++++++++++++++++
1275

    
1276
Using files instead of logical volumes for instance storage would
1277
allow us to get rid of the hard requirement for volume groups for
1278
testing clusters and it would also allow usage of SAN storage to do
1279
live failover taking advantage of this storage solution.
1280

    
1281
Better LVM allocation
+++++++++++++++++++++

Currently, the LV to PV allocation mechanism is a very simple one: at
each new request for a logical volume, tell LVM to allocate the volume
in order based on the amount of free space. This is good for
simplicity and for keeping the usage equally spread over the available
physical disks, but it introduces the problem that an instance could
end up with its (currently) two drives on two physical disks, or
(worse) that the data and metadata for a DRBD device end up on
different drives.

This is bad because it causes unneeded ``replace-disks`` operations in
case of a physical failure.

The solution is to batch allocations for an instance and make the LVM
handling code try to allocate all the storage of one instance as close
together as possible. We will still allow the logical volumes to spill
over to additional disks as needed.

Note that this clustered allocation can only be attempted at initial
instance creation, or when changing the secondary node. At add-disk
time, or when replacing individual disks, it's not easy enough to
compute the current disk map, so we'll not attempt the clustering.

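A minimal illustration (a hypothetical helper, not the Ganeti code) of
the batched allocation idea: given the free space per physical volume
and the sizes of all the volumes an instance needs, prefer a single PV
that can hold them all and only spill over when none can::

  def choose_pvs(pv_free, lv_sizes):
      """Pick the PVs to allocate an instance's LVs on.

      pv_free: dict mapping PV name to free space in MiB
      lv_sizes: list of LV sizes (data and DRBD metadata) in MiB
      """
      total = sum(lv_sizes)
      # PVs that can hold the whole instance, smallest suitable first
      suitable = sorted((free, pv) for pv, free in pv_free.items()
                        if free >= total)
      if suitable:
          return [suitable[0][1]]
      # otherwise spill over: all PVs, most free space first
      return [pv for free, pv in
              sorted(((f, p) for p, f in pv_free.items()), reverse=True)]

  # a 10 GiB data LV plus 128 MiB of DRBD metadata fit together on sdb1
  print(choose_pvs({"/dev/sda1": 8192, "/dev/sdb1": 20480}, [10240, 128]))
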
DRBD8 peer authentication at handshake
++++++++++++++++++++++++++++++++++++++

DRBD8 has a new feature that allows authentication of the peer at
connect time. We can use this more to prevent connecting to the wrong
peer than to secure the connection. Even though we never had issues
with wrong connections, it would be good to implement this.

LVM self-repair (optional)
++++++++++++++++++++++++++

The complete failure of a physical disk is very tedious to
troubleshoot, mainly because of the many failure modes and the many
steps needed. We can safely automate some of the steps, more
specifically the ``vgreduce --removemissing`` run, using the following
method (sketched in code after the list):

#. check if all nodes have consistent volume groups
#. if yes, and previous status was yes, do nothing
#. if yes, and previous status was no, save status and restart
#. if no, and previous status was no, do nothing
#. if no, and previous status was yes:

   #. if more than one node is inconsistent, do nothing
   #. if only one node is inconsistent:

      #. run ``vgreduce --removemissing``
      #. log this occurrence in the Ganeti log in a form that
         can be used for monitoring
      #. [FUTURE] run ``replace-disks`` for all
         instances affected

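A minimal sketch (not the actual Ganeti implementation) of the
decision logic above; ``consistent`` maps each node name to whether
its volume group is healthy, and ``run_vgreduce`` and ``log`` stand in
for the real actions::

  def lvm_self_repair_step(consistent, previous_all_ok, run_vgreduce, log):
      """Apply one round of the rules above; return the new status."""
      all_ok = all(consistent.values())
      if all_ok:
          # nothing to repair; just remember the (possibly new) good state
          return True
      if not previous_all_ok:
          # the problem is already known, do nothing
          return False
      bad_nodes = [node for node, ok in consistent.items() if not ok]
      if len(bad_nodes) == 1:
          node = bad_nodes[0]
          run_vgreduce(node)  # i.e. vgreduce --removemissing on that node
          log("LVM self-repair: removed missing PVs on %s" % node)
      # more than one inconsistent node: too risky, leave it to the admin
      return False
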
Failover to any node
++++++++++++++++++++

With a modified disk activation sequence, we can implement the
*failover to any* functionality, removing many of the layout
restrictions of a cluster:

- the need to reserve memory on the current secondary: this is relaxed
  to a requirement to reserve memory anywhere on the cluster

- the need to first failover and then replace the secondary for an
  instance: with failover-to-any, we can directly fail over to
  another node, which also replaces the disks in the same step

In the following, we denote the current primary by P1, the current
secondary by S1, and the new primary and secondary by P2 and S2. P2
is fixed to the node the user chooses, but the choice of S2 can be
made between P1 and S1. This choice can be constrained, depending on
which of P1 and S1 has failed.

- if P1 has failed, then S1 must become S2, and live migration is not
  possible
- if S1 has failed, then P1 must become S2, and live migration could be
  possible (in theory, but this is not a design goal for 2.0)

The algorithm for performing the failover is straightforward (a short
sketch follows the list):

- verify that S2 (the node the user has chosen to keep as secondary) has
  valid data (is consistent)

- tear down the current DRBD association and set up a DRBD pairing
  between P2 (P2 is indicated by the user) and S2; since P2 has no data,
  it will start re-syncing from S2

- as soon as P2 is in state SyncTarget (i.e. after the resync has
  started but before it has finished), we can promote it to primary role
  (r/w) and start the instance on P2

- as soon as the P2-S2 sync has finished, we can remove
  the old data on the old node that has not been chosen for
  S2

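A minimal sketch of the sequence above; the helpers
(``is_consistent``, ``teardown_drbd``, ``pair_drbd``, ``drbd_state``,
``promote``, ``start_instance``) are hypothetical wrappers around the
real node operations::

  import time

  def failover_to_any(instance, p2, s2, old_node):
      """Fail over 'instance' to new primary p2, keeping s2 as secondary."""
      if not is_consistent(instance, s2):
          raise RuntimeError("S2 does not hold consistent data, aborting")
      # drop the old P1-S1 pairing and connect P2 to S2; P2 starts empty
      teardown_drbd(instance)
      pair_drbd(instance, primary=p2, secondary=s2)
      # once P2 reports SyncTarget the resync is running and P2 is usable
      while drbd_state(instance, p2) != "SyncTarget":
          time.sleep(1)  # in practice: poll with a timeout
      promote(instance, p2)
      start_instance(instance, p2)
      # the data on old_node (the node not kept as S2) is removed only
      # after the P2-S2 sync has finished
      return old_node
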
Caveats: during the P2-S2 sync, a (non-transient) network error
will cause I/O errors on the instance, so (if a longer instance
downtime is acceptable) we can postpone the restart of the instance
until the resync is done. However, disk I/O errors on S2 will cause
data loss, since we don't have a good copy of the data anymore, so in
this case waiting for the sync to complete is not an option. As such,
it is recommended that this feature be used only in conjunction with
proper disk monitoring.

Live migration note: while failover-to-any is possible for all choices
of S2, migration-to-any is possible only if we keep P1 as S2.

Caveats
+++++++

The dynamic device model, while more complex, has an advantage: it
will not reuse by mistake the DRBD device of another instance, since
it always looks for either our own or a free one.

The static one, in contrast, will assume that given a minor number N,
it's ours and we can take over. This needs careful implementation such
that if the minor is in use, either we are able to cleanly shut it
down, or we abort the startup. Otherwise, it could be that we start
syncing between two instances' disks, causing data loss.

Variable number of disk/NICs per instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Variable number of disks
++++++++++++++++++++++++

In order to support high-security scenarios (for example read-only sda
and read-write sdb), we need a fully flexible disk definition. This has
less impact than it might seem at first sight: only the instance
creation has a hard-coded number of disks, not the disk handling
code. The block device handling and most of the instance handling code
are already working with "the instance's disks" as opposed to "the two
disks of the instance", but some pieces are not (e.g. import/export)
and the code needs a review to ensure safety.

The objective is to be able to specify the number of disks at
instance creation, and to be able to toggle a disk from read-only to
read-write afterward.

Variable number of NICs
+++++++++++++++++++++++

Similar to the disk change, we need to allow multiple network
interfaces per instance. This will affect the internal code (some
functions will have to stop assuming that ``instance.nics`` is a list
of length one), the OS API, which currently can export/import only one
instance, and the command line interface.

Interface changes
-----------------

There are two areas of interface changes: API-level changes (the OS
interface and the RAPI interface) and the command line interface
changes.

OS interface
~~~~~~~~~~~~

The current Ganeti OS interface, version 5, is tailored for Ganeti 1.2.
The interface is composed of a series of scripts which get called with
certain parameters to perform OS-dependent operations on the cluster.
The current scripts are:

create
  called when a new instance is added to the cluster
export
  called to export an instance disk to a stream
import
  called to import from a stream to a new instance
rename
  called to perform the OS-specific operations necessary for renaming an
  instance

Currently these scripts suffer from the limitations of Ganeti 1.2: for
example they accept exactly one block device and one swap device to
operate on, rather than any number of generic block devices; they
blindly assume that an instance will have just one network interface;
and they cannot be configured to optimise the instance for a
particular hypervisor.

Since in Ganeti 2.0 we want to support multiple hypervisors and a
non-fixed number of network interfaces and disks, the OS interface
needs to change to transmit the appropriate amount of information
about an instance to its managing operating system when operating on
it. Moreover, since some old assumptions usually made in OS scripts
are no longer valid, we need to re-establish a common understanding of
what can and cannot be assumed about the Ganeti environment.

When designing the new OS API our priorities are:

- ease of use
- future extensibility
- ease of porting from the old API
- modularity

As such we want to limit the number of scripts that must be written to
support an OS, and make it easy to share code between them by
standardising their input. We will also leave the current script
structure unchanged, as far as we can, and make a few of the scripts
(import, export and rename) optional. Most information will be passed
to the scripts through environment variables, for ease of access and
at the same time ease of using only the information a script needs.

The Scripts
+++++++++++

As in Ganeti 1.2, every OS which wants to be installed in Ganeti needs
to support the following functionality, through scripts:

create:
  used to create a new instance running that OS. This script should
  prepare the block devices, and install them so that the new OS can
  boot under the specified hypervisor.
export (optional):
  used to export an installed instance using the given OS to a format
  which can be used to import it back into a new instance.
import (optional):
  used to import an exported instance into a new one. This script is
  similar to create, but the new instance should have the content of the
  export, rather than contain a pristine installation.
rename (optional):
  used to perform the internal OS-specific operations needed to rename
  an instance.

If any optional script is not implemented, Ganeti will refuse to
perform the given operation on instances using the non-implementing
OS. Of course the create script is mandatory, and it doesn't make
sense to support either the export or the import operation but not
both.

Incompatibilities with 1.2
__________________________

We expect the following incompatibilities between the OS scripts for 1.2
and the ones for 2.0:

- Input parameters: in 1.2 those were passed on the command line, in 2.0
  we'll use environment variables, as there will be a lot more
  information and not all OSes may care about all of it.
- Number of calls: export scripts will be called once for each device
  the instance has, and import scripts once for every exported disk.
  Imported instances will be forced to have a number of disks greater
  than or equal to the one of the export.
- Some scripts are not compulsory: if such a script is missing the
  relevant operations will be forbidden for instances of that OS. This
  makes it easier to distinguish between unsupported operations and
  no-op ones (if any).

Input
_____

Rather than using command line flags, as they do now, scripts will
accept inputs from environment variables. We expect the following input
values:

OS_API_VERSION
  The version of the OS API that the following parameters comply with;
  this is used so that in the future we could have OSes supporting
  multiple versions and thus Ganeti send the proper version in this
  parameter
INSTANCE_NAME
  Name of the instance acted on
HYPERVISOR
  The hypervisor the instance should run on (e.g. 'xen-pvm', 'xen-hvm',
  'kvm')
DISK_COUNT
  The number of disks this instance will have
NIC_COUNT
  The number of NICs this instance will have
DISK_<N>_PATH
  Path to the Nth disk.
DISK_<N>_ACCESS
  W if read/write, R if read only. OS scripts are not supposed to touch
  read-only disks, but they are passed so the scripts know about them.
DISK_<N>_FRONTEND_TYPE
  Type of the disk as seen by the instance. Can be 'scsi', 'ide',
  'virtio'
DISK_<N>_BACKEND_TYPE
  Type of the disk as seen from the node. Can be 'block', 'file:loop' or
  'file:blktap'
NIC_<N>_MAC
  MAC address for the Nth network interface
NIC_<N>_IP
  IP address for the Nth network interface, if available
NIC_<N>_BRIDGE
  Node bridge the Nth network interface will be connected to
NIC_<N>_FRONTEND_TYPE
  Type of the Nth NIC as seen by the instance. For example 'virtio',
  'rtl8139', etc.
DEBUG_LEVEL
  Whether more output should be produced, for debugging purposes.
  Currently the only valid values are 0 and 1.

These are only the basic variables we are thinking of now, but more
may come during the implementation and they will be documented in the
:manpage:`ganeti-os-api` man page. All these variables will be
available to all scripts.

Some scripts will need additional information to work. These will have
per-script variables, such as for example:

OLD_INSTANCE_NAME
  rename: the name the instance should be renamed from.
EXPORT_DEVICE
  export: device to be exported, a snapshot of the actual device. The
  data must be exported to stdout.
EXPORT_INDEX
  export: sequential number of the instance device targeted.
IMPORT_DEVICE
  import: device to send the data to, part of the new instance. The data
  must be imported from stdin.
IMPORT_INDEX
  import: sequential number of the instance device targeted.

(Rationale for INSTANCE_NAME as an environment variable: the instance
name is always needed and we could pass it on the command line. On the
other hand, though, this would force scripts to both access the
environment and parse the command line, so we pass it in the
environment for uniformity.)

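As an illustration, a Python 'create' script could consume these
variables roughly as follows; ``install_os_on`` is a hypothetical
stand-in for the real installation work::

  #!/usr/bin/env python
  import os
  import sys

  def install_os_on(disk_path, instance):
      # hypothetical: mkfs, mount, copy the OS image, set up the bootloader
      sys.stderr.write("installing %s on %s\n" % (instance, disk_path))

  def main():
      instance = os.environ["INSTANCE_NAME"]
      disk_count = int(os.environ["DISK_COUNT"])
      for idx in range(disk_count):
          path = os.environ["DISK_%d_PATH" % idx]
          access = os.environ["DISK_%d_ACCESS" % idx]
          if access != "W":
              continue  # never touch read-only disks
          install_os_on(path, instance)
      return 0

  if __name__ == "__main__":
      sys.exit(main())
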
Output/Behaviour
________________

As discussed, scripts should only send user-targeted information to
stderr. The create and import scripts are supposed to format/initialise
the given block devices and install the correct instance data. The
export script is supposed to export instance data to stdout in a format
understandable by the import script. The data will be compressed by
Ganeti, so no compression should be done. The rename script should only
modify the instance's knowledge of what its name is.

Other declarative style features
++++++++++++++++++++++++++++++++

Similar to Ganeti 1.2, OS specifications will need to provide a
'ganeti_api_version' file containing the list of numbers matching the
version(s) of the API they implement. Ganeti itself will always be
compatible with one version of the API and may maintain backwards
compatibility if it's feasible to do so. The numbers are one per line,
so an OS supporting both version 5 and version 20 will have a file
containing two lines. This is different from Ganeti 1.2, which only
supported one version number.

In addition to that, an OS will be able to declare that it supports
only a subset of the Ganeti hypervisors, by declaring them in the
'hypervisors' file.

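For illustration, an OS implementing API versions 5 and 20 would ship a
'ganeti_api_version' file with two lines::

  5
  20

and, assuming one hypervisor name per line (the exact format of the
'hypervisors' file is not specified here), it could restrict itself to
e.g.::

  xen-pvm
  kvm
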
Caveats/Notes
+++++++++++++

We might want to have a "default" import/export behaviour that just
dumps all disks and restores them. This can save work as most systems
will just do this, while allowing flexibility for different systems.

Environment variables are limited in size, but we expect that there will
be enough space to store the information we need. If we discover that
this is not the case we may want to go to a more complex API such as
storing this information on the filesystem and providing the OS script
with the path to a file where it is encoded in some format.

Remote API changes
~~~~~~~~~~~~~~~~~~

The first Ganeti remote API (RAPI) was designed and deployed with the
Ganeti 1.2.5 release. That version provides read-only access to the
cluster state. A fully functional read-write API demands significant
internal changes, which will be implemented in version 2.0.

We decided to implement the Ganeti RAPI in a RESTful way, which aligns
with the key features we are looking for: it is a simple, stateless,
scalable and extensible paradigm for API implementation. As transport
it uses HTTP over SSL, and we are implementing it with JSON encoding,
but in a way that makes it possible to extend it and provide any other
encoding.

Design
++++++

The Ganeti RAPI is implemented as an independent daemon, running on
the same node and with the same permission level as the Ganeti master
daemon. Communication is done through the LUXI library to the master
daemon. In order to keep communication asynchronous, RAPI processes
two types of client requests:

- queries: the server is able to answer immediately
- job submission: some time is required for a useful response

In the query case the requested data is sent back to the client in the
HTTP response body. Typical examples of queries would be: list of
nodes, instances, cluster info, etc.

In the case of job submission, the client receives a job ID, the
identifier which allows one to query the job progress in the job queue
(see `Job Queue`_).

Internally, each exported object has a version identifier, which is
used as a state identifier in the HTTP header E-Tag field for
requests/responses to avoid race conditions.

Resource representation
+++++++++++++++++++++++

The key difference of using REST instead of other API styles is that
REST requires separation of services via resources with unique
URIs. Each of them should have a limited amount of state and support
the standard HTTP methods: GET, POST, DELETE, PUT.

For example in Ganeti's case we can have a set of URIs:

 - ``/{clustername}/instances``
 - ``/{clustername}/instances/{instancename}``
 - ``/{clustername}/instances/{instancename}/tag``
 - ``/{clustername}/tag``

A GET request to ``/{clustername}/instances`` will return the list of
instances, a POST to ``/{clustername}/instances`` should create a new
instance, a DELETE ``/{clustername}/instances/{instancename}`` should
delete the instance, a GET ``/{clustername}/tag`` should return the
cluster tags.

Each resource URI will have a version prefix. The resource IDs are to
be determined.

Internal encoding might be JSON, XML, or any other. The JSON encoding
fits nicely with the Ganeti RAPI needs. The client can request a
specific representation via the Accept field in the HTTP header.

REST uses HTTP as its transport and application protocol for resource
access. The set of possible responses is a subset of standard HTTP
responses.

The statelessness model provides additional reliability and
transparency to operations (e.g. only one request needs to be analyzed
to understand the in-progress operation, not a sequence of multiple
requests/responses).

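A hypothetical client-side sketch of the query/job-submission split
described above; the exact port, version prefix, URIs and JSON fields
are not fixed by this design (resource IDs are still to be
determined), so all names below are illustrative only::

  import json
  import time
  import urllib2

  BASE = "https://cluster.example.com:5080"

  def rapi_get(path):
      return json.loads(urllib2.urlopen(BASE + path).read())

  # query: answered immediately, data in the response body
  instances = rapi_get("/version1/instances")

  # job submission: the response only carries a job ID ...
  request = urllib2.Request(BASE + "/version1/instances",
                            json.dumps({"name": "web1"}))
  job_id = json.loads(urllib2.urlopen(request).read())

  # ... which the client polls until the job has finished
  while True:
      status = rapi_get("/version1/jobs/%s" % job_id)["status"]
      if status in ("success", "error"):
          break
      time.sleep(5)
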
Security
++++++++

With the write functionality, security becomes a much bigger issue.
The Ganeti RAPI uses basic HTTP authentication on top of an
SSL-secured connection to grant access to an exported resource. The
password is stored locally in an Apache-style ``.htpasswd`` file. Only
one level of privileges is supported.

Caveats
+++++++

The model detailed above for job submission requires the client to
poll periodically for updates to the job; an alternative would be to
allow the client to request a callback, or a 'wait for updates' call.

The callback model was not considered due to the following two issues:

- callbacks would require a new model of allowed callback URLs,
  together with a method of managing these
- callbacks only work when the client and the master are in the same
  security domain, and they fail in the other cases (e.g. when there is
  a firewall between the client and the RAPI daemon that only allows
  client-to-RAPI calls, which is usual in DMZ cases)

The 'wait for updates' method is not suited to the HTTP protocol,
where requests are supposed to be short-lived.

Command line changes
~~~~~~~~~~~~~~~~~~~~

Ganeti 2.0 introduces several new features as well as new ways to
handle instance resources like disks or network interfaces. This
requires some noticeable changes in the way command line arguments are
handled:

- extend and modify the command line syntax to support new features
- ensure consistent patterns in command line arguments to reduce
  cognitive load

The design changes that drive these command line changes are, in no
particular order:

- flexible instance disk handling: support a variable number of disks
  with varying properties per instance,
- flexible instance network interface handling: support a variable
  number of network interfaces with varying properties per instance
- multiple hypervisors: multiple hypervisors can be active on the same
  cluster, each supporting different parameters,
- support for device type CDROM (via ISO image)

As such, there are several areas of Ganeti where the command line
arguments will change:

- Cluster configuration

  - cluster initialization
  - cluster default configuration

- Instance configuration

  - handling of network cards for instances,
  - handling of disks for instances,
  - handling of CDROM devices and
  - handling of hypervisor specific options.

Notes about device removal/addition
+++++++++++++++++++++++++++++++++++

To avoid problems with device location changes (e.g. the second network
interface of the instance becoming the first or third and the like),
the list of network/disk devices is treated as a stack, i.e. devices
can only be added/removed at the end of the list of devices of each
class (disk or network) for each instance.

gnt-instance commands
+++++++++++++++++++++

The commands for gnt-instance will be modified and extended to allow
for the new functionality:

- the add command will be extended to support the new device and
  hypervisor options,
- the modify command continues to handle all modifications to
  instances, but will be extended with new arguments for handling
  devices.

Network Device Options
++++++++++++++++++++++

The generic format of the network device option is::

  --net $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:mac: MAC address of the network interface, accepts either a valid
  MAC address or the string 'auto'. If 'auto' is specified, a new MAC
  address will be generated randomly. If the mac device option is not
  specified, the default value 'auto' is assumed.
:bridge: network bridge the network interface is connected
  to. Accepts either a valid bridge name (the specified bridge must
  exist on the node(s)) as string or the string 'auto'. If 'auto' is
  specified, the default bridge is used. If the bridge option is not
  specified, the default value 'auto' is assumed.

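To make the format concrete, a small illustrative parser (not the
actual Ganeti option-handling code) for the ``--net``/``--disk`` value
as used with ``gnt-instance add``::

  def parse_device_option(value):
      """Split e.g. '0:mac=auto,bridge=xen-br0' into (0, {options})."""
      if ":" in value:
          devnum, _, opts = value.partition(":")
      else:
          devnum, opts = value, ""
      options = {}
      for pair in filter(None, opts.split(",")):
          name, _, val = pair.partition("=")
          options[name] = val
      return int(devnum), options

  # --net 0:mac=auto,bridge=xen-br0
  print(parse_device_option("0:mac=auto,bridge=xen-br0"))
  # -> (0, {'mac': 'auto', 'bridge': 'xen-br0'})
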
Disk Device Options
+++++++++++++++++++

The generic format of the disk device option is::

  --disk $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:size: size of the disk device, either a positive number, specifying
  the disk size in mebibytes, or a number followed by a magnitude suffix
  (M for mebibytes, G for gibibytes). Also accepts the string 'auto' in
  which case the default disk size will be used. If the size option is
  not specified, 'auto' is assumed. This option is not valid for all
  disk layout types.
:access: access mode of the disk device, a single letter, valid values
  are:

  - *w*: read/write access to the disk device or
  - *r*: read-only access to the disk device.

  If the access mode is not specified, the default mode of read/write
  access will be configured.
:path: path to the image file for the disk device, string. No default
  exists. This option is not valid for all disk layout types.

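An illustrative helper (not part of Ganeti) showing how the size
values above could be normalised to mebibytes; handling of 'auto' is
left to the caller since the default size is a cluster-level setting::

  def parse_disk_size(value):
      """Convert '512', '512M' or '10G' into an integer number of MiB."""
      if value == "auto":
          return None  # caller substitutes the cluster default
      if value.endswith("G"):
          return int(value[:-1]) * 1024
      if value.endswith("M"):
          return int(value[:-1])
      return int(value)

  assert parse_disk_size("10G") == 10240
  assert parse_disk_size("512") == 512
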
Adding devices
++++++++++++++

To add devices to an already existing instance, use the device type
specific option to gnt-instance modify. Currently, there are two
device type specific options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options, but instead of specifying a device number like for
gnt-instance add, you specify the magic string add. The new device
will always be appended at the end of the list of devices of this type
for the specified instance, e.g. if the instance has disk devices 0, 1
and 2, the newly added disk device will be disk device 3.

Example::

  gnt-instance modify --net add:mac=auto test-instance

Removing devices
++++++++++++++++

Removing devices from an instance is done via gnt-instance
modify. The same device specific options as for adding devices are
used. Instead of a device number and further device options, only the
magic string remove is specified. It will always remove the last
device in the list of devices of this type for the instance specified,
e.g. if the instance has disk devices 0, 1, 2 and 3, disk device
number 3 will be removed.

Example::

  gnt-instance modify --net remove test-instance

Modifying devices
+++++++++++++++++

Modifying devices is also done with device type specific options to
the gnt-instance modify command. There are currently two device type
options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options. The device number you specify identifies the device to
be modified.

Example::

  gnt-instance modify --disk 2:access=r test-instance

Hypervisor Options
++++++++++++++++++

Ganeti 2.0 will support more than one hypervisor. Different
hypervisors have various options that only apply to a specific
hypervisor. Those hypervisor specific options are treated specially
via the ``--hypervisor`` option. The generic syntax of the hypervisor
option is as follows::

  --hypervisor $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor to use, string,
  has to match the supported hypervisors. Example: xen-pvm

:$OPTION: hypervisor option name, string
:$VALUE: hypervisor option value, string

The hypervisor option for an instance can be set at instance creation
time via the ``gnt-instance add`` command. If the hypervisor for an
instance is not specified upon instance creation, the default
hypervisor will be used.

Modifying hypervisor parameters
+++++++++++++++++++++++++++++++

The hypervisor parameters of an existing instance can be modified
using the ``--hypervisor`` option of the ``gnt-instance modify``
command. However, the hypervisor type of an existing instance cannot
be changed, only the hypervisor specific options can be
changed. Therefore, the format of the option parameters has been
simplified to omit the hypervisor name and only contain the comma
separated list of option-value pairs.

Example::

  gnt-instance modify --hypervisor cdrom=/srv/boot.iso,boot_order=cdrom:network test-instance

gnt-cluster commands
++++++++++++++++++++

The commands for gnt-cluster will be extended to allow setting and
changing the default parameters of the cluster:

- The init command will be extended to support the defaults option to
  set the cluster defaults upon cluster initialization.
- The modify command will be added to modify the cluster
  parameters. It will support the --defaults option to change the
  cluster defaults.

Cluster defaults
++++++++++++++++

The generic format of the cluster default setting option is::

  --defaults $OPTION=$VALUE[,$OPTION=$VALUE]

:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

Currently, the following cluster default options are defined (open to
further changes):

:hypervisor: the default hypervisor to use for new instances,
  string. Must be a valid hypervisor known to and supported by the
  cluster.
:disksize: the disksize for newly created instance disks, where
  applicable. Must be either a positive number, in which case the unit
  of mebibyte is assumed, or a positive number followed by a supported
  magnitude symbol (M for mebibyte or G for gibibyte).
:bridge: the default network bridge to use for newly created instance
  network interfaces, string. Must be a valid bridge name of a bridge
  existing on the node(s).

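As an illustration (the option values below are only examples, not
prescribed defaults), setting and later changing the cluster defaults
could look like::

  gnt-cluster init --defaults hypervisor=xen-pvm,disksize=10G,bridge=xen-br0 example-cluster
  gnt-cluster modify --defaults disksize=20G
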
Hypervisor cluster defaults
+++++++++++++++++++++++++++

The generic format of the hypervisor cluster-wide default setting
option is::

  --hypervisor-defaults $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor whose defaults you want
  to set, string
:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

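For example (reusing the hypervisor option names shown earlier for
``gnt-instance modify``; the values are illustrative only)::

  gnt-cluster modify --hypervisor-defaults xen-hvm:boot_order=cdrom:network
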
.. vim: set textwidth=72 :