=================
Ganeti 2.0 design
=================

This document describes the major changes in Ganeti 2.0 compared to
the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture,
paving the way for additional features in future 2.x versions.

.. contents:: :depth: 3

Objective
=========

Ganeti 1.2 has many scalability issues and restrictions due to its
roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and
then the restrictions.

Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of the
cluster administration and imposes a very rigid model. It has the
following main scalability issues:

- only one operation at a time on the cluster [#]_
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
       and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like starting an
instance while an OS installation is running.

Scalability problems
--------------------

Ganeti 1.2 has a single global lock, which is used for all cluster
operations.  This has been painful at various times, for example:

- It is impossible for two people to efficiently interact with a cluster
  (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for example
  failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance
size it's a lot more likely that operations which one could conceive
should run in parallel (for example because they happen on different
nodes) are actually stalling each other while waiting for the global
lock, without a real reason for that to happen.

One of the main causes of this global lock (besides the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so-called *cmd* lock and when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
instances per node, which is no longer true with today's hardware.

Artificial restrictions
-----------------------

Ganeti 1.2 (and previous versions) have a fixed two-disk, one-NIC per
instance model. This is a purely artificial restriction, but it
touches so many areas (configuration, import/export, command line)
that it is better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========

In order to solve the scalability problems, a rewrite of the core
design of Ganeti is required. While the cluster operations themselves
won't change (e.g. start instance will do the same things), the way
these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

.. image:: arch-2.0.png

This differs from the 1.2 architecture by the addition of the master
daemon, which will be the only entity to talk to the node daemons.

Detailed design
===============

The changes for 2.0 can be split into roughly three areas:

- core changes that affect the design of the software
- features (or restriction removals) which do not have a wide
  impact on the design
- user-level and API-level changes which translate into differences for
  the operation of the cluster

Core changes
------------

The main changes will be switching from a per-process model to a
daemon-based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Besides these major changes, another 'core' change, though less
visible to the users, will be changing the model of object attribute
storage and separating it into namespaces (such that a Xen PVM
instance will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(s) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.

Master daemon
~~~~~~~~~~~~~

In Ganeti 2.0, we will have the following *entities*:

- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so-called *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access the nodes via SSH (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case when a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a Python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

The choice of UNIX sockets was made in order to get rid of the need
for authentication and authorisation inside Ganeti; for 2.0, the
permissions on the Unix socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
still implemented internally with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.

The job-related functions will be:

- submit job
- query job (which could also be categorized in the query-functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an exception,
  the first element being the exception type and the second one the
  actual exception arguments; this will allow a simple method of passing
  Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between a hung master daemon and a job that simply has not
  changed

Users of the API that don't use the provided Python library should
take care of the above two cases.
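
For such users, a minimal client sketch may help make the framing and
the two special cases concrete; the function below is illustrative
only (the connection handling, error types and retry policy are
assumptions, not the actual Ganeti client library)::

  import json
  import socket

  ETX = chr(3)  # ASCII decimal 3, the message delimiter

  def luxi_call(socket_path, method, args, timeout=60):
    """Perform a single LUXI request and return the decoded result."""
    data = json.dumps({"method": method, "args": args}) + ETX
    while True:
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      sock.settimeout(timeout)
      sock.connect(socket_path)
      try:
        sock.sendall(data)
        buf = ""
        while ETX not in buf:
          chunk = sock.recv(4096)
          if not chunk:
            raise RuntimeError("Connection closed before message delimiter")
          buf += chunk
        response = json.loads(buf.split(ETX, 1)[0])
      finally:
        sock.close()
      if not response["success"]:
        # a two-element list encodes (exception type, exception arguments)
        raise RuntimeError(response["result"])
      if response["result"] == "nochange":
        continue  # the job did not change; retry instead of returning
      return response["result"]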

Master daemon implementation
++++++++++++++++++++++++++++

The daemon will be based around a main I/O thread that will wait for
new requests from the clients, and that does the setup/shutdown of the
other threads (pools).

There will be two other classes of threads in the daemon (a minimal
sketch follows the list):

- job processing threads, part of a thread pool, and which are
  long-lived, started at daemon startup and terminated only at shutdown
  time
- client I/O threads, which are the ones that talk the local protocol
  (LUXI) to the clients, and are short-lived
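
The following sketch only illustrates this thread layout; the class
and method names are assumptions, not the actual daemon code, and
shutdown handling is omitted::

  import Queue
  import threading

  class MasterDaemonSketch(object):
    """Long-lived worker pool fed by short-lived client I/O threads."""

    def __init__(self, num_workers=4):
      self._queue = Queue.Queue()
      for _ in range(num_workers):
        worker = threading.Thread(target=self._WorkerLoop)
        worker.setDaemon(True)
        worker.start()  # workers live for the whole daemon lifetime

    def _WorkerLoop(self):
      while True:
        job_fn = self._queue.get()  # blocks until a job is available
        job_fn()                    # process the job's opcodes

    def HandleClientRequest(self, request):
      # called from a short-lived client I/O thread: queue the work and
      # return to the client immediately (fire-and-forget submission)
      self._queue.put(lambda: self.ExecuteJob(request))

    def ExecuteJob(self, request):
      pass  # placeholder for the actual opcode processing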

Master startup/failover
+++++++++++++++++++++++

In Ganeti 1.x there is no protection against failing over the master
to a node with stale configuration. In effect, the responsibility of
correct failovers falls on the admin. This is true both for the new
master and for when an old, offline master starts up.

Since in 2.x we are extending the cluster state to cover the job queue
and have a daemon that will execute the job queue by itself, we want
to have more resilience for the master role.

The following algorithm will run whenever a node is ready to
transition to the master role, either at startup time or at node
failover:

#. read the configuration file and parse the node list
   contained within

#. query all the nodes and make sure we obtain an agreement via
   a quorum of at least half plus one nodes for the following:

    - we have the latest configuration and job list (as
      determined by the serial number on the configuration and
      highest job ID on the job queue)

    - there is not even a single node having a newer
      configuration file

    - if we are not failing over (but just starting), the
      quorum agrees that we are the designated master

    - if any of the above is false, we prevent the current operation
      (i.e. we don't become the master)

#. at this point, the node transitions to the master role

#. for all the in-progress jobs, mark them as failed, with
   reason unknown or something similar (master failed, etc.)

Since due to exceptional conditions we could have a situation in which
no node can become the master due to inconsistent data, we will have
an override switch for the master daemon startup that will assume the
current node has the right data and will replicate all the
configuration files to the other nodes.

**Note**: the above algorithm is by no means an election algorithm; it
is a *confirmation* of the master role currently held by a node.
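
A rough sketch of the confirmation step follows; the per-node answer
format and the function name are assumptions, not the actual
implementation::

  def ConfirmMasterRole(cluster_nodes, answers, my_serial, my_top_job,
                        my_name, failover):
    """Return True if a quorum of at least half plus one nodes agrees.

    ``answers`` is assumed to map node name -> (config_serial,
    top_job_id, designated_master); unreachable nodes are simply
    missing from it.
    """
    agreeing = 1  # we count ourselves
    for node in cluster_nodes:
      if node == my_name or node not in answers:
        continue
      serial, top_job, master = answers[node]
      if serial > my_serial or top_job > my_top_job:
        return False  # some node has newer data than we do
      if (serial == my_serial and top_job == my_top_job and
          (failover or master == my_name)):
        agreeing += 1  # this node agrees with our view
    return agreeing >= len(cluster_nodes) // 2 + 1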

Logging
+++++++

The logging system will be switched completely to the standard python
logging module; currently it is based on this module, but exposes a
different API, which is just overhead. As such, the code will be
switched over to standard logging calls, and only the setup will be
custom.

With this change, we will remove the separate debug/info/error logs,
and instead always have one logfile per daemon model (a possible setup
is sketched after the list):

- master-daemon.log for the master daemon
- node-daemon.log for the node daemon (this is the same as in 1.2)
- rapi-daemon.log for the RAPI daemon logs
- rapi-access.log, an additional log file for the RAPI that will be
  in the standard HTTP log format for possible parsing by other tools
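
A possible setup, using only the standard logging module, could look
like the sketch below; the log path and the format string are
assumptions, not the final Ganeti defaults::

  import logging

  def SetupDaemonLogging(logfile, debug=False):
    """Possible custom setup on top of the standard logging module."""
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s pid=%(process)d %(levelname)s %(message)s"))
    root_logger = logging.getLogger("")
    root_logger.addHandler(handler)
    if debug:
      root_logger.setLevel(logging.DEBUG)
    else:
      root_logger.setLevel(logging.INFO)

  # e.g. in the master daemon startup code:
  # SetupDaemonLogging("/var/log/ganeti/master-daemon.log")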

Since the :term:`watcher` will only submit jobs to the master for
startup of the instances, its log file will contain less information
than before, mainly noting that it will start an instance, but not the
results.

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
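
A hedged sketch of the fork-per-request handling (the dispatcher and
the reply encoding are placeholders, not the actual node daemon
code)::

  import os

  def Dispatch(request):
    """Placeholder for the actual backend call dispatch."""
    return {"request": request, "result": None}

  def HandleRpcRequest(connection, request):
    """Fork per request: the parent goes straight back to accepting
    connections, the child processes the call and exits."""
    pid = os.fork()
    if pid > 0:
      return pid  # parent: the child now owns this request
    try:
      connection.send(str(Dispatch(request)))  # child: do the work, reply
    finally:
      os._exit(0)  # never return into the parent's accept loop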

Caveats
+++++++

A discussed alternative is to keep the current individual processes
touching the cluster configuration model. The reasons we have not
chosen this approach are:

- the speed of reading and unserializing the cluster state
  today is not small enough that we can ignore it; the addition of
  the job queue will make the startup cost even higher. While this
  runtime cost is low, it can be on the order of a few seconds on
  bigger clusters, which for very quick commands is comparable to
  the actual duration of the computation itself

- individual commands would make it harder to implement a
  fire-and-forget job request, along the lines "start this
  instance but do not wait for it to finish"; it would require a
  model of backgrounding the operation and other things that are
  much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use Twisted, all the code needs to be 'twisted-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (Twisted name for a main
  loop)
- the more advanced granular locking that we want to implement would
  require, if written in the async-manner, deep integration with the
  Twisted stack, to such an extent that business-logic is inseparable
  from the protocol coding; we felt that this is an unreasonable request,
  and that a good protocol library should allow complete separation of
  low-level protocol calls and business logic; by comparison, the threaded
  approach combined with the HTTP(S) protocol required (for the first
  iteration) absolutely no changes from the 1.2 code, and later changes
  for optimizing the inter-node RPC calls required just syntactic changes
  (e.g. ``rpc.call_...`` to ``self.rpc.call_...``)

Another issue is the Twisted API stability - during the Ganeti
1.x lifetime, we had to implement workarounds many times for changes
in the Twisted version, so that for example 1.2 is able to use both
Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.

Granular locking
~~~~~~~~~~~~~~~~

We want to make sure that multiple operations can run in parallel on a Ganeti
Cluster. In order for this to happen we need to make sure concurrently run
operations don't step on each other's toes and break the cluster.

This design addresses how we are going to deal with locking so that:

- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a Non-Goal. We have identified a
set of operations that are currently bottlenecks and need to be parallelised
and have worked on those. In the future it will be possible to address other
needs, thus making the cluster more and more parallel one step at a time.

This section only talks about parallelising Ganeti level operations, aka
Logical Units, and the locking needed for that. Any other synchronization lock
needed internally by the code is outside its scope.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation transparent
  to their usage
- automatically grabbing multiple locks in the right order (avoid deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it remains up to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same time.
Any attempt to grab a lock while already holding one in the wrong order will be
checked for, and fail.

The Locks
+++++++++

At the first stage we have decided to provide the following locks:

- One "config file" lock
- One lock per node in the cluster
- One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and the
node locks before the config lock. Locks will need to be acquired at the same
time for multiple instances and nodes, and internal ordering will be dealt
within the locking library, which, for simplicity, will just use alphabetical
order.

Each lock has the following three possible statuses:

- unlocked (anyone can grab the lock)
- shared (anyone can grab/have the lock but only in shared mode)
- exclusive (no one else can grab/have the lock)
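
To illustrate the intended acquisition order (instances, then nodes,
then the config lock, alphabetically within each level), here is a
rough sketch; the lock table layout and method names are assumptions
rather than the final library API::

  LEVEL_INSTANCE, LEVEL_NODE, LEVEL_CONFIG = range(3)

  def AcquireInOrder(lock_table, wanted, shared=False):
    """Acquire a set of locks in a deadlock-free order.

    ``lock_table`` maps (level, name) to lock objects offering
    acquire(shared=...)/release(); ``wanted`` is a list of (level, name)
    pairs. Instances sort before nodes, nodes before the config lock,
    and names sort alphabetically within a level.
    """
    ordered = sorted(wanted)
    acquired = []
    try:
      for key in ordered:
        lock_table[key].acquire(shared=shared)
        acquired.append(key)
    except:
      # release in reverse order if anything went wrong, then re-raise
      for key in reversed(acquired):
        lock_table[key].release()
      raise
    return ordered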

Handling conversion to more granularity
+++++++++++++++++++++++++++++++++++++++

In order to convert to a more granular approach transparently, each time we
split a lock into more sub-locks we'll create a "metalock", which will depend
on those sub-locks and live for the time necessary for all the code to convert
(or forever, in some conditions). When a metalock exists all converted code
must acquire it in shared mode, so it can run concurrently, but still be
exclusive with old code, which acquires it exclusively.

In the beginning the only such lock will be what replaces the current "command"
lock, and will acquire all the locks in the system, before proceeding. This
lock will be called the "Big Ganeti Lock" because holding that one will avoid
any other concurrent Ganeti operations.

We might also want to devise more metalocks (eg. all nodes, all nodes+config)
in order to make it easier for some parts of the code to acquire what they
need without specifying it explicitly.

In the future things like the node locks could become metalocks, should we
decide to split them into an even more fine grained approach, but this will
probably be only after the first 2.0 version has been released.

Adding/Removing locks
+++++++++++++++++++++

When a new instance or a new node is created an associated lock must be added
to the list. The relevant code will need to inform the locking library of such
a change.

This needs to be compatible with every other lock in the system, especially
metalocks that guarantee to grab sets of resources without specifying them
explicitly. The implementation of this will be handled in the locking library
itself.

When instances or nodes disappear from the cluster the relevant locks
must be removed. This is easier than adding new elements, as the code
which removes them must own them exclusively already, and thus deals
with metalocks exactly as normal code acquiring those locks. Any
operation queuing on a removed lock will fail after its removal.

Asynchronous operations
+++++++++++++++++++++++

For the first version the locking library will only export synchronous
operations, which will block till the needed locks are held, and only fail if
the request is impossible or somehow erroneous.

In the future we may want to implement different types of asynchronous
operations such as:

- try to acquire this lock set and fail if not possible
- try to acquire one of these lock sets and return the first one you were
  able to get (or after a timeout) (select/poll like)

These operations can be used to prioritize operations based on available locks,
rather than making them just blindly queue for acquiring them. The inherent
risk, though, is that any code using the first operation, or setting a timeout
for the second one, is susceptible to starvation and thus may never be able to
get the required locks and complete certain tasks. Considering this,
providing/using these operations should not be among our first priorities.

Locking granularity
+++++++++++++++++++

For the first version of this code we'll convert each Logical Unit to
acquire/release the locks it needs, so locking will be at the Logical Unit
level.  In the future we may want to split logical units into independent
"tasklets" with their own locking requirements. A different design doc (or mini
design doc) will cover the move from Logical Units to tasklets.

Code examples
+++++++++++++

In general when acquiring locks we should use a code path equivalent to::

  lock.acquire()
  try:
    ...
    # other code
  finally:
    lock.release()

This makes sure we release all locks, and avoid possible deadlocks. Of
course extra care must be taken not to leave, if possible, locked
structures in an unusable state. Note that with Python 2.5 a simpler
syntax will be possible, but we want to keep compatibility with Python
2.4 so the new constructs should not be used.

In order to avoid this extra indentation and code changes everywhere in the
Logical Units code, we decided to allow LUs to declare locks, and then execute
their code with their locks acquired. In the new world LUs are called like
this::

  # user passed names are expanded to the internal lock/resource name,
  # then known needed locks are declared
  lu.ExpandNames()
  ... some locking/adding of locks may happen ...
  # late declaration of locks for one level: this is useful because sometimes
  # we can't know which resource we need before locking the previous level
  lu.DeclareLocks() # for each level (cluster, instance, node)
  ... more locking/adding of locks can happen ...
  # these functions are called with the proper locks held
  lu.CheckPrereq()
  lu.Exec()
  ... locks declared for removal are removed, all acquired locks released ...

The Processor and the LogicalUnit class will contain exact documentation on how
locks are supposed to be declared.

Caveats
+++++++

This library will provide an easy upgrade path to bring all the code to
granular locking without breaking everything, and it will also guarantee
against a lot of common errors. Code switching from the old "lock everything"
lock to the new system, though, needs to be carefully scrutinised to be sure it
is really acquiring all the necessary locks, and none has been overlooked or
forgotten.

The code can contain other locks outside of this library, to synchronise other
threaded code (eg for the job queue) but in general these should be leaf locks
or carefully structured non-leaf ones, to avoid deadlock race conditions.

Job Queue
~~~~~~~~~

Granular locking is not enough to speed up operations; we also need a
queue to store these and to be able to process as many as possible in
parallel.

A Ganeti job will consist of multiple ``OpCodes`` which are the basic
element of operation in Ganeti 1.2 (and will remain as such). Most
command-level commands are equivalent to one OpCode, or in some cases
to a sequence of opcodes, all of the same type (e.g. evacuating a node
will generate N opcodes of type replace disks).

Job execution—“Life of a Ganeti job”
++++++++++++++++++++++++++++++++++++

#. Job gets submitted by the client. A new job identifier is generated and
   assigned to the job. The job is then automatically replicated [#replic]_
   to all nodes in the cluster. The identifier is returned to the client.
#. A pool of worker threads waits for new jobs. If all are busy, the job has
   to wait and the first worker finishing its work will grab it. Otherwise any
   of the waiting threads will pick up the new job.
#. Client waits for job status updates by calling a waiting RPC function.
   Log messages may be shown to the user. Until the job is started, it can
   also be canceled.
#. As soon as the job is finished, its final result and status can be retrieved
   from the server.
#. If the client archives the job, it gets moved to a history directory.
   There will be a method to archive all jobs older than a given age.

.. [#replic] We need replication in order to maintain the consistency across
   all nodes in the system; the master node only differs in the fact that
   now it is running the master daemon, but if it fails and we do a master
   failover, the jobs are still visible on the new master (though marked as
   failed).

Failures to replicate a job to other nodes will only be flagged as
errors in the master daemon log if more than half of the nodes failed,
otherwise we ignore the failure, and rely on the fact that the next
update (for still running jobs) will retry the update. For finished
jobs, it is less of a problem.

Future improvements will look into checking the consistency of the job
list and jobs themselves at master daemon startup.

Job storage
+++++++++++

Jobs are stored in the filesystem as individual files, serialized
using JSON (standard serialization mechanism in Ganeti).

The choice of storing each job in its own file was made because:

- a file can be atomically replaced
- a file can easily be replicated to other nodes
- checking consistency across nodes can be implemented very easily, since
  all job files should be (at a given moment in time) identical

The other possible choices that were discussed and discounted were:

- single big file with all job data: not feasible due to difficult updates
- in-process databases: hard to replicate the entire database to the
  other nodes, and replicating individual operations does not mean we keep
  consistency

Queue structure
+++++++++++++++

All file operations have to be done atomically by writing to a temporary file
and subsequent renaming. Except for log messages, every change in a job is
stored and replicated to other nodes.

::

  /var/lib/ganeti/queue/
    job-1 (JSON encoded job description and status)
    […]
    job-37
    job-38
    job-39
    lock (Queue managing process opens this file in exclusive mode)
    serial (Last job ID used)
    version (Queue format version)
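
The atomic-replace rule maps to the usual write-then-rename pattern; a
minimal sketch, with an illustrative file naming scheme::

  import json
  import os

  def WriteJobFile(queue_dir, job_id, job_data):
    """Atomically replace a job file: write a temporary file in the same
    directory, flush it to disk, then rename it over the final name."""
    final = os.path.join(queue_dir, "job-%s" % job_id)
    tmp = final + ".new"
    fd = open(tmp, "w")
    try:
      json.dump(job_data, fd)
      fd.flush()
      os.fsync(fd.fileno())  # make sure the data is on disk before renaming
    finally:
      fd.close()
    os.rename(tmp, final)  # atomic on POSIX filesystems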

Locking
+++++++

Locking in the job queue is a complicated topic. It is called from more than
one thread and must be thread-safe. For simplicity, a single lock is used for
the whole job queue.

A more detailed description can be found in doc/locking.rst.

Internal RPC
++++++++++++

RPC calls available between the Ganeti master and node daemons:

jobqueue_update(file_name, content)
  Writes a file in the job queue directory.
jobqueue_purge()
  Cleans the job queue directory completely, including archived jobs.
jobqueue_rename(old, new)
  Renames a file in the job queue directory.

Client RPC
++++++++++

RPC between Ganeti clients and the Ganeti master daemon supports the following
operations:

SubmitJob(ops)
  Submits a list of opcodes and returns the job identifier. The identifier is
  guaranteed to be unique during the lifetime of a cluster.
WaitForJobChange(job_id, fields, […], timeout)
  This function waits until a job changes or a timeout expires. The condition
  for when a job changed is defined by the fields passed and the last log
  message received.
QueryJobs(job_ids, fields)
  Returns field values for the job identifiers passed.
CancelJob(job_id)
  Cancels the job specified by identifier. This operation may fail if the job
  is already running, canceled or finished.
ArchiveJob(job_id)
  Moves a job into the …/archive/ directory. This operation will fail if the
  job has not been canceled or finished.
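
Tying this to the LUXI sketch earlier, a submit-and-wait client flow
could look roughly as follows; the return shape of WaitForJobChange,
the field names and the final status values are simplified
assumptions::

  def SubmitAndWait(socket_path, opcodes):
    """Submit a job and wait for a final status without polling."""
    job_id = luxi_call(socket_path, "SubmitJob", [opcodes])
    while True:
      # blocks on the server until the watched field ("status") changes
      job_info, log_entries = luxi_call(socket_path, "WaitForJobChange",
                                        [job_id, ["status"], [], 60])
      if job_info[0] in ("success", "error", "canceled"):
        return luxi_call(socket_path, "QueryJobs", [[job_id], ["opresult"]])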

Job and opcode status
+++++++++++++++++++++

Each job and each opcode has, at any time, one of the following states:

Queued
  The job/opcode was submitted, but did not yet start.
Waiting
  The job/opcode is waiting for a lock to proceed.
Running
  The job/opcode is running.
Canceled
  The job/opcode was canceled before it started.
Success
  The job/opcode ran and finished successfully.
Error
  The job/opcode was aborted with an error.

If the master is aborted while a job is running, the job will be set to the
Error status once the master starts again.

History
+++++++

Archived jobs are kept in a separate directory,
``/var/lib/ganeti/queue/archive/``.  This is done in order to speed up
the queue handling: by default, the jobs in the archive are not
touched by any functions. Only the current (unarchived) jobs are
parsed, loaded, and verified (if implemented) by the master daemon.

Ganeti updates
++++++++++++++

The queue has to be completely empty for Ganeti updates with changes
in the job queue structure. In order to allow this, there will be a
way to prevent new jobs from entering the queue.

Object parameters
~~~~~~~~~~~~~~~~~

Across all cluster configuration data, we have multiple classes of
parameters:

A. cluster-wide parameters (e.g. name of the cluster, the master);
   these are the ones that we have today, and are unchanged from the
   current model

#. node parameters

#. instance specific parameters, e.g. the name of disks (LV), that
   cannot be shared with other instances

#. instance parameters, that are or can be the same for many
   instances, but are not hypervisor related; e.g. the number of VCPUs,
   or the size of memory

#. instance parameters that are hypervisor specific (e.g. kernel_path
   or PAE mode)

The following definitions for instance parameters will be used below:

:hypervisor parameter:
  a hypervisor parameter (or hypervisor specific parameter) is defined
  as a parameter that is interpreted by the hypervisor support code in
  Ganeti and usually is specific to a particular hypervisor (like the
  kernel path for :term:`PVM` which makes no sense for :term:`HVM`).

:backend parameter:
  a backend parameter is defined as an instance parameter that can be
  shared among a list of instances, and is either generic enough not
  to be tied to a given hypervisor or cannot influence at all the
  hypervisor behaviour.

  For example: memory, vcpus, auto_balance

  All these parameters will be encoded into constants.py with the prefix "BE\_"
  and the whole list of parameters will exist in the set "BES_PARAMETERS"

:proper parameter:
  a parameter whose value is unique to the instance (e.g. the name of a LV,
  or the MAC of a NIC)

As a general rule, for all kinds of parameters, “None” (or in
JSON-speak, “null”) will no longer be a valid value for a parameter. As
such, only non-default parameters will be saved as part of objects in
the serialization step, reducing the size of the serialized format.

Cluster parameters
++++++++++++++++++

Cluster parameters remain as today, attributes at the top level of the
Cluster object. In addition, two new attributes at this level will
hold defaults for the instances:

- hvparams, a dictionary indexed by hypervisor type, holding default
  values for hypervisor parameters that are not defined/overridden by
  the instances of this hypervisor type

- beparams, a dictionary holding (for 2.0) a single element 'default',
  which holds the default value for backend parameters

Node parameters
+++++++++++++++

Node-related parameters are very few, and we will continue using the
same model for these as previously (attributes on the Node object).

There are three new node flags, described in a separate section "node
flags" below.

Instance parameters
+++++++++++++++++++

As described before, the instance parameters are split in three:
instance proper parameters, unique to each instance, instance
hypervisor parameters and instance backend parameters.

The “hvparams” and “beparams” are kept in two dictionaries at instance
level. Only non-default parameters are stored (but once customized, a
parameter will be kept, even with the same value as the default one,
until reset).

The names for hypervisor parameters in the instance.hvparams subtree
should be chosen to be as generic as possible, especially if specific
parameters could conceivably be useful for more than one hypervisor,
e.g. ``instance.hvparams.vnc_console_port`` instead of using both
``instance.hvparams.hvm_vnc_console_port`` and
``instance.hvparams.kvm_vnc_console_port``.

There are some special cases related to disks and NICs (for example):
a disk has both Ganeti-related parameters (e.g. the name of the LV)
and hypervisor-related parameters (how the disk is presented to/named
in the instance). The former parameters remain as proper-instance
parameters, while the latter values are migrated to the hvparams
structure. In 2.0, we will have only globally-per-instance such
hypervisor parameters, and not per-disk ones (e.g. all NICs will be
exported as of the same type).

Starting from the 1.2 list of instance parameters, here is how they
will be mapped to the three classes of parameters:

- name (P)
- primary_node (P)
- os (P)
- hypervisor (P)
- status (P)
- memory (BE)
- vcpus (BE)
- nics (P)
- disks (P)
- disk_template (P)
- network_port (P)
- kernel_path (HV)
- initrd_path (HV)
- hvm_boot_order (HV)
- hvm_acpi (HV)
- hvm_pae (HV)
- hvm_cdrom_image_path (HV)
- hvm_nic_type (HV)
- hvm_disk_type (HV)
- vnc_bind_address (HV)
- serial_no (P)

Parameter validation
++++++++++++++++++++

To support the new cluster parameter design, additional features will
be required from the hypervisor support implementations in Ganeti.

The hypervisor support implementation API will be extended with the
following features:

:PARAMETERS: class-level attribute holding the list of valid parameters
  for this hypervisor
:CheckParamSyntax(hvparams): checks that the given parameters are
  valid (as in the names are valid) for this hypervisor; usually just
  comparing ``hvparams.keys()`` and ``cls.PARAMETERS``; this is a class
  method that can be called from within master code (i.e. cmdlib) and
  should be safe to do so
:ValidateParameters(hvparams): verifies the values of the provided
  parameters against this hypervisor; this is a method that will be
  called on the target node, from backend.py code, and as such can
  make node-specific checks (e.g. kernel_path checking)
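
As an illustration only (not the actual hypervisor classes, and with
example parameter names), a hypervisor implementation could provide
these hooks roughly as follows::

  import os

  class ExampleHypervisor(object):
    """Illustration of the extended parameter API."""

    PARAMETERS = ["kernel_path", "initrd_path"]

    @classmethod
    def CheckParamSyntax(cls, hvparams):
      # pure name check, safe to run on the master (i.e. from cmdlib)
      invalid = set(hvparams.keys()) - set(cls.PARAMETERS)
      if invalid:
        raise ValueError("Invalid hypervisor parameters: %s" %
                         ", ".join(sorted(invalid)))

    def ValidateParameters(self, hvparams):
      # node-local value check, run from backend.py on the target node
      kernel = hvparams.get("kernel_path")
      if kernel and not os.path.isfile(kernel):
        raise ValueError("Kernel %s not found on this node" % kernel)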

Default value application
+++++++++++++++++++++++++

The application of defaults to an instance is done in the Cluster
object, via two new methods as follows:

- ``Cluster.FillHV(instance)``, returns 'filled' hvparams dict, based on
  instance's hvparams and cluster's ``hvparams[instance.hypervisor]``

- ``Cluster.FillBE(instance, be_type="default")``, which returns the
  beparams dict, based on the instance and cluster beparams

The FillHV/BE transformations will be used, for example, in the RpcRunner
when sending an instance for activation/stop, and the sent instance
hvparams/beparams will have the final value (noded code doesn't know
about defaults).

LU code will need to self-call the transformation, if needed.
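
The filling itself is essentially a dictionary merge; a rough sketch
of the intended semantics (not the actual Cluster methods)::

  def FillDict(defaults, custom):
    """Return a copy of ``defaults`` updated with the parameters that
    were explicitly set on the object (the 'filled' parameter dict)."""
    result = defaults.copy()
    result.update(custom)
    return result

  # roughly what Cluster.FillHV(instance) is meant to compute:
  #   FillDict(cluster.hvparams[instance.hypervisor], instance.hvparams)
  # and Cluster.FillBE(instance, be_type="default"):
  #   FillDict(cluster.beparams["default"], instance.beparams)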

Opcode changes
++++++++++++++

The parameter changes will have an impact on the OpCodes, especially on
the following ones:

- ``OpCreateInstance``, where the new hv and be parameters will be sent as
  dictionaries; note that all hv and be parameters are now optional, as
  the values can be instead taken from the cluster
- ``OpQueryInstances``, where we have to be able to query these new
  parameters; the syntax for names will be ``hvparam/$NAME`` and
  ``beparam/$NAME`` for querying an individual parameter out of one
  dictionary, and ``hvparams``, respectively ``beparams``, for the whole
  dictionaries
- ``OpModifyInstance``, where the modified parameters are sent as
  dictionaries

Additionally, we will need new OpCodes to modify the cluster-level
defaults for the be/hv sets of parameters.

Caveats
+++++++

One problem that might appear is that our classification is not
complete or not good enough, and we'll need to change this model. As
a last resort, we would need to roll back and keep the 1.2 style.

Another problem is that classification of one parameter is unclear
(e.g. ``network_port``, is this BE or HV?); in this case we'll take
the risk of having to move parameters later between classes.

Security
++++++++

The only security issue that we foresee is if some new parameters will
have sensitive values. If so, we will need to have a way to export the
config data while purging the sensitive values.

E.g. for the drbd shared secrets, we could export these with the
values replaced by an empty string.

Node flags
~~~~~~~~~~

Ganeti 2.0 adds three node flags that change the way nodes are handled
within Ganeti and the related infrastructure (iallocator interaction,
RAPI data export).

*master candidate* flag
+++++++++++++++++++++++

Ganeti 2.0 allows more scalability in operation by introducing
parallelization. However, a new bottleneck is reached, namely the
synchronization and replication of cluster configuration to all nodes
in the cluster.

This breaks scalability as the speed of the replication decreases
roughly with the number of nodes in the cluster. The goal of the
master candidate flag is to change this O(n) into O(1) with respect to
job and configuration data propagation.

Only nodes having this flag set (let's call this set of nodes the
*candidate pool*) will have jobs and configuration data replicated.

The cluster will have a new parameter (runtime changeable) called
``candidate_pool_size`` which represents the number of candidates the
cluster tries to maintain (preferably automatically).

This will impact the cluster operations as follows:

- jobs and config data will be replicated only to a fixed set of nodes
- master fail-over will only be possible to a node in the candidate pool
- cluster verify needs changing to account for these two roles
- external scripts will no longer have access to the configuration
  file (this is not recommended anyway)

The caveats of this change are:

- if all candidates are lost (completely), cluster configuration is
  lost (but it should be backed up external to the cluster anyway)

- failed nodes which are candidates must be dealt with properly, so
  that we don't lose too many candidates at the same time; this will be
  reported in cluster verify

- the 'all equal' concept of Ganeti is no longer true

- the partial distribution of config data means that all nodes will
  have to revert to ssconf files for master info (as in 1.2)

Advantages:

- speed on a 100+ node simulated cluster is greatly enhanced, even
  for a simple operation; ``gnt-instance remove`` on a diskless instance
  goes from ~9 seconds to ~2 seconds

- node failure of non-candidates will have less impact on the cluster

The default value for the candidate pool size will be set to 10 but
this can be changed at cluster creation and modified any time later.

Testing on simulated big clusters with sequential and parallel jobs
shows that this value (10) is a sweet spot from a performance and load
point of view.

*offline* flag
++++++++++++++

In order to better support the situation in which nodes are offline
(e.g. for repair) without altering the cluster configuration, Ganeti
needs to be told and needs to properly handle this state for nodes.

This will result in simpler procedures, and fewer mistakes, when the
amount of node failures is high on an absolute scale (either due to a
high failure rate or simply big clusters).

Nodes having this attribute set will not be contacted for inter-node
RPC calls, will not be master candidates, and will not be able to host
instances as primaries.

Setting this attribute on a node:

- will not be allowed if the node is the master
- will not be allowed if the node has primary instances
- will cause the node to be demoted from the master candidate role (if
  it was), possibly causing another node to be promoted to that role

This attribute will impact the cluster operations as follows:

- querying these nodes for anything will fail instantly in the RPC
  library, with a specific RPC error (RpcResult.offline == True)

- they will be listed in the Other section of cluster verify

The code is changed in the following ways:

- RPC calls were converted to skip such nodes:

  - RpcRunner-instance-based RPC calls are easy to convert

  - static/classmethod RPC calls are harder to convert, and were left
    alone

- the RPC results were unified so that this new result state (offline)
  can be differentiated

- master voting still queries in-repair nodes, as we need to ensure
  consistency in case the (wrong) masters have old data, and nodes have
  come back from repairs

Caveats:

- some operation semantics are less clear (e.g. what to do on instance
  start with offline secondary?); for now, these will just fail as if the
  flag is not set (but faster)
- 2-node cluster with one node offline needs manual startup of the
  master with a special flag to skip voting (as the master can't get a
  quorum there)

One of the advantages of implementing this flag is that it will allow
future automation tools to automatically put the node in repairs and
recover from this state, and the code (should/will) handle this much
better than just timing out. So, future possible improvements (for
later versions):

- watcher will detect nodes which fail RPC calls, will attempt to ssh
  to them, and on failure will put them offline
- watcher will try to ssh and query the offline nodes, and if successful
  will take them off the repair list

Alternatives considered: The RPC call model in 2.0 is, by default,
much nicer - errors are logged in the background, and job/opcode
execution is clearer, so we could simply not introduce this. However,
having this state will make both the codepaths clearer (offline
vs. temporary failure) and the operational model (it's not a node with
errors, but an offline node).

*drained* flag
++++++++++++++

Due to parallel execution of jobs in Ganeti 2.0, we could have the
following situation:

- gnt-node migrate + failover is run
- gnt-node evacuate is run, which schedules a long-running 6-opcode
  job for the node
- partway through, a new job comes in that runs an iallocator script,
  which finds the above node as empty and a very good candidate
- gnt-node evacuate has finished, but now it has to be run again, to
  clean the above instance(s)

In order to prevent this situation, and to be able to get nodes into
proper offline status easily, a new *drained* flag was added to the nodes.

This flag (which actually means "is being, or was, drained and is
expected to go offline") will prevent allocations on the node, but
otherwise all other operations (start/stop instance, query, etc.) keep
working without any restrictions.

Interaction between flags
+++++++++++++++++++++++++

While these flags are implemented as separate flags, they are
mutually-exclusive and act together with the master node role
as a single *node status* value. In other words, a node is only in one
of these roles at a given time. The lack of any of these flags denotes
a regular node.

The current node status is visible in the ``gnt-cluster verify``
output, and the individual flags can be examined via separate fields in
the ``gnt-node list`` output.

These new flags will be exported in both the iallocator input message
and via RAPI; see the respective man pages for the exact names.

Feature changes
---------------

The main feature-level changes will be:

- a number of disk related changes
- removal of fixed two-disk, one-nic per instance limitation

Disk handling changes
~~~~~~~~~~~~~~~~~~~~~

The storage options available in Ganeti 1.x were introduced based on
then-current software (first DRBD 0.7 then later DRBD 8) and the
estimated usage patterns. However, experience has later shown that some
assumptions made initially are not true and that more flexibility is
needed.

One main assumption made was that disk failures should be treated as 'rare'
events, and that each of them needs to be manually handled in order to ensure
data safety; however, both these assumptions are false:

- disk failures can be a common occurrence, based on usage patterns or cluster
  size
- our disk setup is robust enough (referring to DRBD8 + LVM) that we could
  automate more of the recovery

Note that we still don't have fully-automated disk recovery as a goal, but our
goal is to reduce the manual work needed.

As such, we plan the following main changes:

- DRBD8 is much more flexible and stable than its previous version (0.7),
  such that removing the support for the ``remote_raid1`` template and
  focusing only on DRBD8 is easier

- dynamic discovery of DRBD devices is not actually needed in a cluster
  where the DRBD namespace is controlled by Ganeti; switching to a static
  assignment (done at either instance creation time or change secondary time)
  will change the disk activation time from O(n) to O(1), which on big
  clusters is a significant gain

- remove the hard dependency on LVM (currently all available storage types are
  ultimately backed by LVM volumes) by introducing file-based storage

Additionally, a number of smaller enhancements are also planned:

- support variable number of disks
- support read-only disks

Future enhancements in the 2.x series, which do not require base design
changes, might include:

- enhancement of the LVM allocation method in order to try to keep
  all of an instance's virtual disks on the same physical
  disks

- add support for DRBD8 authentication at handshake time in
  order to ensure each device connects to the correct peer

- remove the restriction of failover only to the secondary,
  which creates very strict rules on cluster allocation

DRBD minor allocation
+++++++++++++++++++++

Currently, when trying to identify or activate a new DRBD (or MD)
device, the code scans all in-use devices in order to see if we find
one that looks similar to our parameters and is already in the desired
state or not. Since this needs external commands to be run, it is very
slow when more than a few devices are already present.

Therefore, we will change the discovery model from dynamic to
static. When a new device is logically created (added to the
configuration) a free minor number is computed from the list of
devices that should exist on that node and assigned to that
device.

At device activation, if the minor is already in use, we check if
it has our parameters; if not, we just destroy the device (if
possible, otherwise we abort) and start it with our own
parameters.

This means that we in effect take ownership of the minor space for
that device type; if there's a user-created DRBD minor, it will be
automatically removed.

The change will have the effect of reducing the number of external
commands run per device from a constant number times the index of the
first free DRBD minor to just a constant number.
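
As a rough illustration of the static assignment (the configuration
data structure here is an assumption, not the actual Ganeti objects),
computing a free minor for a node reduces to a pass over the
already-assigned minors::

  def FirstFreeMinor(node_name, assigned):
    """Return the lowest DRBD minor not yet assigned on ``node_name``.

    ``assigned`` is an iterable of (node, minor) pairs taken from the
    configuration, i.e. the devices that should exist on the nodes.
    """
    used = set(minor for (node, minor) in assigned if node == node_name)
    minor = 0
    while minor in used:
      minor += 1
    return minor

  # FirstFreeMinor("node1", [("node1", 0), ("node1", 1), ("node2", 0)])
  # returns 2, without running any external commands on the node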

Removal of obsolete device types (MD, DRBD7)
++++++++++++++++++++++++++++++++++++++++++++

We need to remove these device types because of two issues. First,
DRBD7 has bad failure modes in case of dual failures (both network and
disk): it cannot propagate the error up the device stack and instead
just panics. Second, due to the asymmetry between primary and
secondary in MD+DRBD mode, we cannot do live failover (not even if we
had MD+DRBD8).

File-based storage support
++++++++++++++++++++++++++

Using files instead of logical volumes for instance storage would
allow us to get rid of the hard requirement for volume groups for
testing clusters and it would also allow usage of SAN storage to do
live failover taking advantage of this storage solution.
1265

    
1266
Better LVM allocation
+++++++++++++++++++++

Currently, the LV to PV allocation mechanism is a very simple one: at
each new request for a logical volume, tell LVM to allocate the volume
in order based on the amount of free space. This is good for
simplicity and for keeping the usage equally spread over the available
physical disks; however, it introduces the problem that an instance
could end up with its (currently) two drives on two physical disks, or
(worse) that the data and metadata for a DRBD device end up on
different drives.

This is bad because it causes unneeded ``replace-disks`` operations in
case of a physical failure.

The solution is to batch allocations for an instance and make the LVM
handling code try to allocate all the storage of one instance as close
together as possible. We will still allow the logical volumes to spill
over to additional disks as needed.

Note that this clustered allocation can only be attempted at initial
instance creation, or when changing the secondary node. When adding a
disk or replacing individual disks, it is not easy to compute the
current disk map, so we will not attempt the clustering.

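One LVM mechanism that could support this is passing an explicit list
of physical volumes to ``lvcreate``, which restricts where the volume
is allocated. Illustration only; the volume group, volume and device
names below are placeholders::

  # put both volumes of one instance on the same physical volume
  lvcreate -L 10G -n instance1-sda xenvg /dev/sdb1
  lvcreate -L 10G -n instance1-sdb xenvg /dev/sdb1
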
DRBD8 peer authentication at handshake
++++++++++++++++++++++++++++++++++++++

DRBD8 has a new feature that allows authentication of the peer at
connect time. We can use this to prevent connecting to the wrong peer,
more than to secure the connection. Even though we never had issues
with wrong connections, it would be good to implement this.

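In DRBD8 terms this is configured with a per-resource shared secret; a
sketch of the relevant ``drbd.conf`` ``net`` section (the hash
algorithm and secret shown are placeholder values)::

  net {
    cram-hmac-alg "sha1";
    shared-secret "per-device-secret";
  }
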
LVM self-repair (optional)
++++++++++++++++++++++++++

The complete failure of a physical disk is very tedious to
troubleshoot, mainly because of the many failure modes and the many
steps needed. We can safely automate some of the steps, more
specifically running ``vgreduce --removemissing``, using the following
method (sketched in code after the list):

#. check if all nodes have consistent volume groups
#. if yes, and previous status was yes, do nothing
#. if yes, and previous status was no, save status and restart
#. if no, and previous status was no, do nothing
#. if no, and previous status was yes:
    #. if more than one node is inconsistent, do nothing
    #. if only one node is inconsistent:
        #. run ``vgreduce --removemissing``
        #. log this occurrence in the Ganeti log in a form that
           can be used for monitoring
        #. [FUTURE] run ``replace-disks`` for all
           instances affected

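A minimal, illustrative sketch of this decision logic (the consistency
check itself, the ``vgreduce`` invocation and the logging are assumed
to happen elsewhere)::

  def node_to_repair(prev_consistent, inconsistent_nodes):
    """Return the node on which to run 'vgreduce --removemissing'.

    Returns None when the rules above say we should do nothing.
    """
    if not inconsistent_nodes:
      # everything is consistent now; nothing to repair
      return None
    if not prev_consistent:
      # it was already broken at the previous check: do nothing
      return None
    if len(inconsistent_nodes) > 1:
      # more than one node inconsistent: too risky to act automatically
      return None
    # exactly one node became inconsistent since the last check
    return inconsistent_nodes[0]
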
Failover to any node
++++++++++++++++++++

With a modified disk activation sequence, we can implement the
*failover to any* functionality, removing many of the layout
restrictions of a cluster:

- the need to reserve memory on the current secondary: this gets
  reduced to the need to reserve memory anywhere on the cluster

- the need to first failover and then replace the secondary for an
  instance: with failover-to-any, we can directly failover to
  another node, which also replaces the disks in the same
  step

In the following, we denote the current primary by P1, the current
secondary by S1, and the new primary and secondary by P2 and S2. P2
is fixed to the node the user chooses, but the choice of S2 can be
made between P1 and S1. This choice can be constrained, depending on
which of P1 and S1 has failed.

- if P1 has failed, then S1 must become S2, and live migration is not possible
- if S1 has failed, then P1 must become S2, and live migration could be
  possible (in theory, but this is not a design goal for 2.0)

The algorithm for performing the failover is straightforward (a sketch
in code follows the list):

- verify that S2 (the node the user has chosen to keep as secondary) has
  valid data (is consistent)

- tear down the current DRBD association and set up a DRBD pairing between
  P2 (P2 is indicated by the user) and S2; since P2 has no data, it will
  start re-syncing from S2

- as soon as P2 is in state SyncTarget (i.e. after the resync has started
  but before it has finished), we can promote it to primary role (r/w)
  and start the instance on P2

- as soon as the P2-S2 sync has finished, we can remove
  the old data on the old node that has not been chosen for
  S2

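A sketch of this sequence (illustrative only; the helper functions are
hypothetical placeholders for the real block device and hypervisor
layers)::

  def failover_to_any(instance, p2, s2, old_node):
    # refuse to proceed if the data we will sync from is not good
    if not is_consistent(instance, s2):
      raise Exception("S2 does not have consistent data")
    teardown_drbd(instance)              # drop the old P1-S1 pairing
    setup_drbd_pair(instance, p2, s2)    # P2 starts resyncing from S2
    wait_for_state(instance, p2, "SyncTarget")
    promote_to_primary(instance, p2)     # r/w even while the sync runs
    start_instance(instance, p2)
    wait_for_sync(instance, p2)
    remove_old_disks(instance, old_node) # the node not chosen as S2
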
Caveats: during the P2-S2 sync, a (non-transient) network error
will cause I/O errors on the instance, so (if a longer instance
downtime is acceptable) we can postpone the restart of the instance
until the resync is done. However, disk I/O errors on S2 will cause
data loss, since we don't have a good copy of the data anymore, so in
this case waiting for the sync to complete is not an option. As such,
it is recommended that this feature be used only in conjunction with
proper disk monitoring.

Live migration note: While failover-to-any is possible for all choices
of S2, migration-to-any is possible only if we keep P1 as S2.

Caveats
+++++++

The dynamic device model, while more complex, has an advantage: it
will not mistakenly reuse the DRBD device of another instance, since
it always looks for either our own or a free one.

The static one, in contrast, will assume that given a minor number N,
it's ours and we can take over. This needs careful implementation such
that if the minor is in use, either we are able to cleanly shut it
down, or we abort the startup. Otherwise, it could be that we start
syncing between two instances' disks, causing data loss.

Variable number of disk/NICs per instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Variable number of disks
++++++++++++++++++++++++

In order to support high-security scenarios (for example read-only sda
and read-write sdb), we need to make a fully flexible disk
definition. This has less impact than it might seem at first sight:
only the instance creation has a hard-coded number of disks, not the
disk handling code. The block device handling and most of the instance
handling code is already working with "the instance's disks" as
opposed to "the two disks of the instance", but some pieces are not
(e.g. import/export) and the code needs a review to ensure safety.

The objective is to be able to specify the number of disks at
instance creation, and to be able to toggle a disk between read-only
and read-write afterward.

Variable number of NICs
+++++++++++++++++++++++

Similar to the disk change, we need to allow multiple network
interfaces per instance. This will affect the internal code (some
functions will have to stop assuming that ``instance.nics`` is a list
of length one), the OS API which currently can export/import only one
instance, and the command line interface.

Interface changes
-----------------

There are two areas of interface changes: API-level changes (the OS
interface and the RAPI interface) and the command line interface
changes.

OS interface
~~~~~~~~~~~~

The current Ganeti OS interface, version 5, is tailored for Ganeti 1.2. The
interface is composed of a series of scripts which get called with certain
parameters to perform OS-dependent operations on the cluster. The current
scripts are:

create
  called when a new instance is added to the cluster
export
  called to export an instance disk to a stream
import
  called to import from a stream to a new instance
rename
  called to perform the os-specific operations necessary for renaming an
  instance

Currently these scripts suffer from the limitations of Ganeti 1.2: for
example they accept exactly one block and one swap device to operate on,
rather than any number of generic block devices; they blindly assume that
an instance will have just one network interface; and they cannot be
configured to optimise the instance for a particular hypervisor.

Since in Ganeti 2.0 we want to support multiple hypervisors and a
non-fixed number of network interfaces and disks, the OS interface needs
to change to transmit the appropriate amount of information about an
instance to its managing operating system when operating on it. Moreover,
since some old assumptions usually made in OS scripts are no longer valid,
we need to re-establish a common understanding of what can and what cannot
be assumed regarding the Ganeti environment.

When designing the new OS API our priorities are:

- ease of use
- future extensibility
- ease of porting from the old API
- modularity

As such we want to limit the number of scripts that must be written to
support an OS, and make it easy to share code between them by making
their input uniform. We will also leave the current script structure
unchanged, as far as we can, and make a few of the scripts (import,
export and rename) optional. Most information will be passed to the
scripts through environment variables, for ease of access and at the same
time ease of using only the information a script needs.

The Scripts
+++++++++++

As in Ganeti 1.2, every OS which wants to be installed in Ganeti needs to
support the following functionality, through scripts:

create:
  used to create a new instance running that OS. This script should prepare the
  block devices, and install them so that the new OS can boot under the
  specified hypervisor.
export (optional):
  used to export an installed instance using the given OS to a format which can
  be used to import it back into a new instance.
import (optional):
  used to import an exported instance into a new one. This script is similar to
  create, but the new instance should have the content of the export, rather
  than contain a pristine installation.
rename (optional):
  used to perform the internal OS-specific operations needed to rename an
  instance.

If any optional script is not implemented, Ganeti will refuse to perform
the given operation on instances using the non-implementing OS. Of course
the create script is mandatory, and it doesn't make sense to support
either the export or the import operation but not both.

Incompatibilities with 1.2
__________________________

We expect the following incompatibilities between the OS scripts for 1.2 and
the ones for 2.0:

- Input parameters: in 1.2 those were passed on the command line, in 2.0
  we'll use environment variables, as there will be a lot more information
  and not all OSes may care about all of it.
- Number of calls: export scripts will be called once for each device the
  instance has, and import scripts once for every exported disk. Imported
  instances will be forced to have a number of disks greater than or equal
  to that of the export.
- Some scripts are not compulsory: if such a script is missing the
  relevant operations will be forbidden for instances of that OS. This
  makes it easier to distinguish between unsupported operations and no-op
  ones (if any).

Input
_____

Rather than using command line flags, as they do now, scripts will accept
inputs from environment variables.  We expect the following input values:

OS_API_VERSION
  The version of the OS API that the following parameters comply with;
  this is used so that in the future we could have OSes supporting
  multiple versions and thus Ganeti can send the proper version in this
  parameter
INSTANCE_NAME
  Name of the instance acted on
HYPERVISOR
  The hypervisor the instance should run on (e.g. 'xen-pvm', 'xen-hvm', 'kvm')
DISK_COUNT
  The number of disks this instance will have
NIC_COUNT
  The number of NICs this instance will have
DISK_<N>_PATH
  Path to the Nth disk.
DISK_<N>_ACCESS
  W if read/write, R if read only. OS scripts are not supposed to touch
  read-only disks, but they are passed so that the scripts know about them.
DISK_<N>_FRONTEND_TYPE
  Type of the disk as seen by the instance. Can be 'scsi', 'ide', 'virtio'
DISK_<N>_BACKEND_TYPE
  Type of the disk as seen from the node. Can be 'block', 'file:loop' or
  'file:blktap'
NIC_<N>_MAC
  MAC address for the Nth network interface
NIC_<N>_IP
  IP address for the Nth network interface, if available
NIC_<N>_BRIDGE
  Node bridge the Nth network interface will be connected to
NIC_<N>_FRONTEND_TYPE
  Type of the Nth NIC as seen by the instance. For example 'virtio',
  'rtl8139', etc.
DEBUG_LEVEL
  Whether more output should be produced, for debugging purposes. Currently
  the only valid values are 0 and 1.

These are only the basic variables we are thinking of now, but more
may come during the implementation and they will be documented in the
:manpage:`ganeti-os-api` man page. All these variables will be
available to all scripts.

Some scripts will need a bit more information to work. These will have
per-script variables, such as for example:

OLD_INSTANCE_NAME
  rename: the name the instance should be renamed from.
EXPORT_DEVICE
  export: device to be exported, a snapshot of the actual device. The data
  must be exported to stdout.
EXPORT_INDEX
  export: sequential number of the instance device targeted.
IMPORT_DEVICE
  import: device to send the data to, part of the new instance. The data
  must be imported from stdin.
IMPORT_INDEX
  import: sequential number of the instance device targeted.

(Rationale for INSTANCE_NAME as an environment variable: the instance name
is always needed and we could pass it on the command line. On the other
hand, this would force scripts to both access the environment and parse
the command line, so we'll move it for uniformity.)

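As a hedged illustration of how a create script could consume these
variables (a sketch only; the actual formatting and installation work is
replaced by print statements)::

  # minimal sketch of an OS 'create' script under the new API
  import os

  instance = os.environ["INSTANCE_NAME"]

  # iterate over all disks instead of assuming exactly one block and
  # one swap device
  for idx in range(int(os.environ["DISK_COUNT"])):
    path = os.environ["DISK_%d_PATH" % idx]
    if os.environ["DISK_%d_ACCESS" % idx] != "W":
      continue  # never touch read-only disks
    print("would format and install %s on %s" % (instance, path))

  # iterate over all NICs instead of assuming a single interface
  for idx in range(int(os.environ["NIC_COUNT"])):
    mac = os.environ["NIC_%d_MAC" % idx]
    print("would configure NIC %d with MAC %s" % (idx, mac))
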
Output/Behaviour
________________

As discussed, scripts should only send user-targeted information to
stderr. The create and import scripts are supposed to format/initialise
the given block devices and install the correct instance data. The export
script is supposed to export instance data to stdout in a format
understandable by the import script. The data will be compressed by
Ganeti, so no compression should be done. The rename script should only
modify the instance's knowledge of what its name is.

Other declarative style features
++++++++++++++++++++++++++++++++

Similar to Ganeti 1.2, OS specifications will need to provide a
'ganeti_api_version' file containing a list of numbers matching the
version(s) of the API they implement. Ganeti itself will always be
compatible with one version of the API and may maintain backwards
compatibility if it's feasible to do so. The numbers are one-per-line,
so an OS supporting both version 5 and version 20 will have a file
containing two lines. This is different from Ganeti 1.2, which only
supported one version number.

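For example, the 'ganeti_api_version' file of an OS supporting both of
those versions would simply contain::

  5
  20
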
In addition to that, an OS will be able to declare that it supports only
a subset of the Ganeti hypervisors, by declaring them in the
'hypervisors' file.

Caveats/Notes
+++++++++++++

We might want to have a "default" import/export behaviour that just dumps
all disks and restores them. This can save work as most systems will just
do this, while allowing flexibility for different systems.

Environment variables are limited in size, but we expect that there will
be enough space to store the information we need. If we discover that
this is not the case we may want to go to a more complex API such as
storing that information on the filesystem and providing the OS script
with the path to a file where it is encoded in some format.

Remote API changes
~~~~~~~~~~~~~~~~~~

The first Ganeti remote API (RAPI) was designed and deployed with the
Ganeti 1.2.5 release.  That version provides read-only access to the
cluster state. A fully functional read-write API demands significant
internal changes, which will be implemented in version 2.0.

We decided to implement the Ganeti RAPI in a RESTful way, which is
aligned with the key features we are looking for: it is a simple,
stateless, scalable and extensible paradigm of API implementation. As
transport it uses HTTP over SSL, and we are implementing it with JSON
encoding, but in a way that makes it possible to extend it and provide
any other encoding.

Design
++++++

The Ganeti RAPI is implemented as an independent daemon, running on the
same node and with the same permission level as the Ganeti master
daemon. Communication is done through the LUXI library to the master
daemon. In order to keep communication asynchronous, RAPI processes two
types of client requests:

- queries: the server is able to answer immediately
- job submission: some time is required for a useful response

In the query case, the requested data is sent back to the client in the
HTTP response body. Typical examples of queries would be: list of nodes,
instances, cluster info, etc.

In the case of job submission, the client receives a job ID, the
identifier which allows it to query the job progress in the job queue
(see `Job Queue`_).

Internally, each exported object has a version identifier, which is
used as a state identifier in the HTTP ETag header field for
requests/responses to avoid race conditions.

Resource representation
+++++++++++++++++++++++

The key difference of using REST instead of other API styles is that
REST requires separation of services via resources with unique URIs.
Each of them should have a limited amount of state and support standard
HTTP methods: GET, POST, DELETE, PUT.

For example, in Ganeti's case we can have a set of URIs:

 - ``/{clustername}/instances``
 - ``/{clustername}/instances/{instancename}``
 - ``/{clustername}/instances/{instancename}/tag``
 - ``/{clustername}/tag``

A GET request to ``/{clustername}/instances`` will return the list of
instances, a POST to ``/{clustername}/instances`` should create a new
instance, a DELETE ``/{clustername}/instances/{instancename}`` should
delete the instance, and a GET ``/{clustername}/tag`` should return the
cluster tags.

Each resource URI will have a version prefix. The resource IDs are to
be determined.

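As an illustration of the query model only (the host name, port,
credentials and exact URI layout, including the version prefix, are
placeholders), a client could list the instances with a plain
HTTPS/Basic-auth request::

  import base64
  import json
  import urllib.request

  url = "https://master.example.com:5080/example-cluster/instances"
  req = urllib.request.Request(url)
  cred = base64.b64encode(b"rapi-user:secret").decode("ascii")
  req.add_header("Authorization", "Basic " + cred)

  # a query returns its result directly in the HTTP response body
  with urllib.request.urlopen(req) as resp:
    instances = json.loads(resp.read().decode("utf-8"))
  print(instances)
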
Internal encoding might be JSON, XML, or any other. The JSON encoding
fits nicely with the Ganeti RAPI needs. The client can request a specific
representation via the Accept field in the HTTP header.

REST uses HTTP as its transport and application protocol for resource
access. The set of possible responses is a subset of standard HTTP
responses.

The statelessness model provides additional reliability and
transparency to operations (e.g. only one request needs to be analyzed
to understand the in-progress operation, not a sequence of multiple
requests/responses).

Security
++++++++

With the write functionality, security becomes a much bigger issue.
The Ganeti RAPI uses basic HTTP authentication on top of an
SSL-secured connection to grant access to an exported resource. The
password is stored locally in an Apache-style ``.htpasswd`` file. Only
one level of privileges is supported.

Caveats
+++++++

The model detailed above for job submission requires the client to
poll periodically for updates to the job; an alternative would be to
allow the client to request a callback, or a 'wait for updates' call.

The callback model was not considered due to the following two issues:

- callbacks would require a new model of allowed callback URLs,
  together with a method of managing these
- callbacks only work when the client and the master are in the same
  security domain, and they fail in the other cases (e.g. when there is
  a firewall between the client and the RAPI daemon that only allows
  client-to-RAPI calls, which is usual in DMZ cases)

The 'wait for updates' method is not suited to the HTTP protocol,
where requests are supposed to be short-lived.

Command line changes
~~~~~~~~~~~~~~~~~~~~

Ganeti 2.0 introduces several new features as well as new ways to
handle instance resources like disks or network interfaces. This
requires some noticeable changes in the way command line arguments are
handled.

- extend and modify command line syntax to support new features
- ensure consistent patterns in command line arguments to reduce
  cognitive load

The design changes that require these command line changes are, in no
particular order:

- flexible instance disk handling: support a variable number of disks
  with varying properties per instance,
- flexible instance network interface handling: support a variable
  number of network interfaces with varying properties per instance
- multiple hypervisors: multiple hypervisors can be active on the same
  cluster, each supporting different parameters,
- support for device type CDROM (via ISO image)

As such, there are several areas of Ganeti where the command line
arguments will change:

- Cluster configuration

  - cluster initialization
  - cluster default configuration

- Instance configuration

  - handling of network cards for instances,
  - handling of disks for instances,
  - handling of CDROM devices and
  - handling of hypervisor specific options.

Notes about device removal/addition
+++++++++++++++++++++++++++++++++++

To avoid problems with device location changes (e.g. the second network
interface of the instance becoming the first or third and the like),
the list of network/disk devices is treated as a stack, i.e. devices
can only be added/removed at the end of the list of devices of each
class (disk or network) for each instance.

gnt-instance commands
+++++++++++++++++++++

The commands for gnt-instance will be modified and extended to allow
for the new functionality:

- the add command will be extended to support the new device and
  hypervisor options,
- the modify command continues to handle all modifications to
  instances, but will be extended with new arguments for handling
  devices.

Network Device Options
++++++++++++++++++++++

The generic format of the network device option is::

  --net $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:mac: MAC address of the network interface, accepts either a valid
  MAC address or the string 'auto'. If 'auto' is specified, a new MAC
  address will be generated randomly. If the mac device option is not
  specified, the default value 'auto' is assumed.
:bridge: network bridge the network interface is connected
  to. Accepts either a valid bridge name (the specified bridge must
  exist on the node(s)) as string or the string 'auto'. If 'auto' is
  specified, the default bridge is used. If the bridge option is not
  specified, the default value 'auto' is assumed.

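As an illustration only (the bridge name is a placeholder), a NIC
specification at instance creation could look like::

  --net 0:mac=auto,bridge=xen-br0
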
Disk Device Options
+++++++++++++++++++

The generic format of the disk device option is::

  --disk $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:size: size of the disk device, either a positive number, specifying
  the disk size in mebibytes, or a number followed by a magnitude suffix
  (M for mebibytes, G for gibibytes). Also accepts the string 'auto' in
  which case the default disk size will be used. If the size option is
  not specified, 'auto' is assumed. This option is not valid for all
  disk layout types.
:access: access mode of the disk device, a single letter, valid values
  are:

  - *w*: read/write access to the disk device or
  - *r*: read-only access to the disk device.

  If the access mode is not specified, the default mode of read/write
  access will be configured.
:path: path to the image file for the disk device, string. No default
  exists. This option is not valid for all disk layout types.

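As an illustration only (the sizes are placeholders), a two-disk
instance with a read-only second disk could be specified with::

  --disk 0:size=10G --disk 1:size=2G,access=r
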
Adding devices
++++++++++++++

To add devices to an already existing instance, use the device type
specific option to gnt-instance modify. Currently, there are two
device type specific options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options, but instead of specifying a device number like for
gnt-instance add, you specify the magic string add. The new device
will always be appended at the end of the list of devices of this type
for the specified instance, e.g. if the instance has disk devices 0, 1
and 2, the newly added disk device will be disk device 3.

Example::

  gnt-instance modify --net add:mac=auto test-instance

Removing devices
++++++++++++++++

Removing devices from an instance is done via gnt-instance
modify. The same device specific options as for adding devices are
used. Instead of a device number and further device options, only the
magic string remove is specified. It will always remove the last
device in the list of devices of this type for the instance specified,
e.g. if the instance has disk devices 0, 1, 2 and 3, the disk device
number 3 will be removed.

Example::

  gnt-instance modify --net remove test-instance

Modifying devices
+++++++++++++++++

Modifying devices is also done with device type specific options to
the gnt-instance modify command. There are currently two device type
options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax of the device specific options is similar to the generic
device options. The device number you specify identifies the device to
be modified.

Example::

  gnt-instance modify --disk 2:access=r test-instance

Hypervisor Options
++++++++++++++++++

Ganeti 2.0 will support more than one hypervisor. Different
hypervisors have various options that only apply to a specific
hypervisor. Those hypervisor specific options are treated specially
via the ``--hypervisor`` option. The generic syntax of the hypervisor
option is as follows::

  --hypervisor $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor to use, string,
  has to match the supported hypervisors. Example: xen-pvm

:$OPTION: hypervisor option name, string
:$VALUE: hypervisor option value, string

The hypervisor option for an instance can be set at instance creation
time via the ``gnt-instance add`` command. If the hypervisor for an
instance is not specified upon instance creation, the default
hypervisor will be used.

Modifying hypervisor parameters
+++++++++++++++++++++++++++++++

The hypervisor parameters of an existing instance can be modified
using the ``--hypervisor`` option of the ``gnt-instance modify``
command. However, the hypervisor type of an existing instance can not
be changed, only the particular hypervisor specific options can be
changed. Therefore, the format of the option parameters has been
simplified to omit the hypervisor name and only contain the comma
separated list of option-value pairs.

Example::

  gnt-instance modify --hypervisor cdrom=/srv/boot.iso,boot_order=cdrom:network test-instance

gnt-cluster commands
++++++++++++++++++++

The command for gnt-cluster will be extended to allow setting and
changing the default parameters of the cluster:

- The init command will be extended to support the defaults option to
  set the cluster defaults upon cluster initialization.
- The modify command will be added to modify the cluster
  parameters. It will support the --defaults option to change the
  cluster defaults.

Cluster defaults
++++++++++++++++

The generic format of the cluster default setting option is::

  --defaults $OPTION=$VALUE[,$OPTION=$VALUE]

:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

Currently, the following cluster default options are defined (open to
further changes):

:hypervisor: the default hypervisor to use for new instances,
  string. Must be a valid hypervisor known to and supported by the
  cluster.
:disksize: the disksize for newly created instance disks, where
  applicable. Must be either a positive number, in which case the unit
  of megabyte is assumed, or a positive number followed by a supported
  magnitude symbol (M for megabyte or G for gigabyte).
:bridge: the default network bridge to use for newly created instance
  network interfaces, string. Must be a valid bridge name of a bridge
  existing on the node(s).

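For illustration only (the values are placeholders), the cluster
defaults could then be changed with::

  gnt-cluster modify --defaults hypervisor=xen-pvm,disksize=10G,bridge=xen-br0
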
Hypervisor cluster defaults
+++++++++++++++++++++++++++

The generic format of the hypervisor cluster-wide default setting
option is::

  --hypervisor-defaults $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor whose defaults you want
  to set, string
:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.