=================
Ganeti 2.0 design
=================

This document describes the major changes in Ganeti 2.0 compared to
the 1.2 version.

The 2.0 version will constitute a rewrite of the 'core' architecture,
paving the way for additional features in future 2.x versions.

.. contents::

Objective
=========

Ganeti 1.2 has many scalability issues and restrictions due to its
roots as software for managing small and 'static' clusters.

Version 2.0 will attempt to remedy first the scalability issues and
then the restrictions.

Background
==========

While Ganeti 1.2 is usable, it severely limits the flexibility of the
cluster administration and imposes a very rigid model. It has the
following main scalability issues:

- only one operation at a time on the cluster [#]_
- poor handling of node failures in the cluster
- mixing hypervisors in a cluster not allowed

It also has a number of artificial restrictions, due to historical design:

- fixed number of disks (two) per instance
- fixed number of NICs

.. [#] Replace disks will release the lock, but this is an exception
       and not a recommended way to operate

The 2.0 version is intended to address some of these problems, and
create a more flexible code base for future developments.

Among these problems, the single-operation-at-a-time restriction is
the biggest issue with the current version of Ganeti. It is such a big
impediment in operating bigger clusters that many times one is tempted
to remove the lock just to do a simple operation like start instance
while an OS installation is running.

Scalability problems
--------------------

Ganeti 1.2 has a single global lock, which is used for all cluster
operations.  This has been painful at various times, for example:

- It is impossible for two people to efficiently interact with a cluster
  (for example for debugging) at the same time.
- When batch jobs are running it's impossible to do other work (for example
  failovers/fixes) on a cluster.

This poses scalability problems: as clusters grow in node and instance
size it's a lot more likely that operations which one could conceive
should run in parallel (for example because they happen on different
nodes) are actually stalling each other while waiting for the global
lock, without a real reason for that to happen.

One of the main causes of this global lock (beside the higher
difficulty of ensuring data consistency in a more granular lock model)
is the fact that currently there is no long-lived process in Ganeti
that can coordinate multiple operations. Each command tries to acquire
the so called *cmd* lock and when it succeeds, it takes complete
ownership of the cluster configuration and state.

Other scalability problems are due to the design of the DRBD device
model, which assumed at its creation a low (one to four) number of
instances per node, which is no longer true with today's hardware.

Artificial restrictions
-----------------------

Ganeti 1.2 (and previous versions) have a fixed two-disks, one-NIC per
instance model. This is a purely artificial restriction, but it
touches so many areas (configuration, import/export, command line)
that it is better suited to a major release than a minor one.

Architecture issues
-------------------

The fact that each command is a separate process that reads the
cluster state, executes the command, and saves the new state is also
an issue on big clusters where the configuration data for the cluster
begins to be non-trivial in size.

Overview
========

In order to solve the scalability problems, a rewrite of the core
design of Ganeti is required. While the cluster operations themselves
won't change (e.g. start instance will do the same things), the way
these operations are scheduled internally will change radically.

The new design will change the cluster architecture to:

.. image:: arch-2.0.png

This differs from the 1.2 architecture by the addition of the master
daemon, which will be the only entity to talk to the node daemons.


Detailed design
===============

The changes for 2.0 can be split into roughly three areas:

- core changes that affect the design of the software
- features (or restriction removals) but which do not have a wide
  impact on the design
- user-level and API-level changes which translate into differences for
  the operation of the cluster

Core changes
------------

The main changes will be switching from a per-process model to a
daemon based model, where the individual gnt-* commands will be
clients that talk to this daemon (see `Master daemon`_). This will
allow us to get rid of the global cluster lock for most operations,
having instead a per-object lock (see `Granular locking`_). Also, the
daemon will be able to queue jobs, and this will allow the individual
clients to submit jobs without waiting for them to finish, and also
see the result of old requests (see `Job Queue`_).

Besides these major changes, another 'core' change, though not as
visible to the users, will be changing the model of object attribute
storage, separating it into name spaces (such that a Xen PVM
instance will not have the Xen HVM parameters). This will allow future
flexibility in defining additional parameters. For more details see
`Object parameters`_.

The various changes brought in by the master daemon model and the
read-write RAPI will require changes to the cluster security; we move
away from Twisted and use HTTP(S) for intra- and extra-cluster
communications. For more details, see the security document in the
doc/ directory.

Master daemon
~~~~~~~~~~~~~

In Ganeti 2.0, we will have the following *entities*:

- the master daemon (on the master node)
- the node daemon (on all nodes)
- the command line tools (on the master node)
- the RAPI daemon (on the master node)

The master-daemon related interaction paths are:

- (CLI tools/RAPI daemon) and the master daemon, via the so called *LUXI* API
- the master daemon and the node daemons, via the node RPC

There are also some additional interaction paths for exceptional cases:

- CLI tools might access via SSH the nodes (for ``gnt-cluster copyfile``
  and ``gnt-cluster command``)
- master failover is a special case when a non-master node will SSH
  and do node-RPC calls to the current master

The protocol between the master daemon and the node daemons will be
changed from (Ganeti 1.2) Twisted PB (perspective broker) to HTTP(S),
using a simple PUT/GET of JSON-encoded messages. This is done due to
difficulties in working with the Twisted framework and its protocols
in a multithreaded environment, which we can overcome by using a
simpler stack (see the caveats section).

The protocol between the CLI/RAPI and the master daemon will be a
custom one (called *LUXI*): on a UNIX socket on the master node, with
rights restricted by filesystem permissions, the CLI/RAPI will talk to
the master daemon using JSON-encoded messages.

The operations supported over this internal protocol will be encoded
via a python library that will expose a simple API for its
users. Internally, the protocol will simply encode all objects in JSON
format and decode them on the receiver side.

For more details about the RAPI daemon see `Remote API changes`_, and
for the node daemon see `Node daemon changes`_.

The LUXI protocol
+++++++++++++++++

As described above, the protocol for making requests or queries to the
master daemon will be a UNIX-socket based simple RPC of JSON-encoded
messages.

The choice of a UNIX socket was made in order to get rid of the need
for authentication and authorisation inside Ganeti; for 2.0, the
permissions on the Unix socket itself will determine the access
rights.

We will have two main classes of operations over this API:

- cluster query functions
- job related functions

The cluster query functions are usually short-duration, and are the
equivalent of the ``OP_QUERY_*`` opcodes in Ganeti 1.2 (and they are
internally implemented still with these opcodes). The clients are
guaranteed to receive the response in a reasonable time via a timeout.

The job-related functions will be:

- submit job
- query job (which could also be categorized in the query-functions)
- archive job (see the job queue design doc)
- wait for job change, which allows a client to wait without polling

For more details of the actual operation list, see the `Job Queue`_.

Both requests and responses will consist of a JSON-encoded message
followed by the ``ETX`` character (ASCII decimal 3), which is not a
valid character in JSON messages and thus can serve as a message
delimiter. The contents of the messages will be a dictionary with two
fields:

:method:
  the name of the method called
:args:
  the arguments to the method, as a list (no keyword arguments allowed)

Responses will follow the same format, with the two fields being:

:success:
  a boolean denoting the success of the operation
:result:
  the actual result, or error message in case of failure

There are two special values for the result field:

- in the case that the operation failed, and this field is a list of
  length two, the client library will try to interpret it as an exception,
  the first element being the exception type and the second one the
  actual exception arguments; this will allow a simple method of passing
  Ganeti-related exceptions across the interface
- for the *WaitForChange* call (that waits on the server for a job to
  change status), if the result is equal to ``nochange`` instead of the
  usual result for this call (a list of changes), then the library will
  internally retry the call; this is done in order to differentiate
  internally between a hung master daemon and a job that simply has not
  changed

Users of the API that don't use the provided python library should
take care of the above two cases.
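
A minimal, hypothetical client-side sketch of such an exchange follows;
the socket path handling, the helper name and the use of the standard
``json`` module are illustrative assumptions, not part of the design::

  import json
  import socket

  ETX = chr(3)  # ASCII 3, the message delimiter

  def luxi_call(sock_path, method, args):
      """Send one LUXI request and return the decoded result (sketch)."""
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      sock.connect(sock_path)
      try:
          # request: JSON dictionary with "method" and "args", ETX-terminated
          sock.sendall(json.dumps({"method": method, "args": args}) + ETX)
          # read until the ETX delimiter shows up in the stream
          buf = ""
          while ETX not in buf:
              chunk = sock.recv(4096)
              if not chunk:
                  raise EOFError("connection closed before end of message")
              buf += chunk
          response = json.loads(buf.split(ETX, 1)[0])
      finally:
          sock.close()
      # response: {"success": ..., "result": ...}
      if not response["success"]:
          result = response["result"]
          if isinstance(result, list) and len(result) == 2:
              # a two-element list encodes (exception type, exception args)
              raise RuntimeError("%s: %s" % (result[0], result[1]))
          raise RuntimeError(result)
      return response["result"]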
252

    
253

    
254
Master daemon implementation
255
++++++++++++++++++++++++++++
256

    
257
The daemon will be based around a main I/O thread that will wait for
258
new requests from the clients, and that does the setup/shutdown of the
259
other thread (pools).
260

    
261
There will two other classes of threads in the daemon:
262

    
263
- job processing threads, part of a thread pool, and which are
264
  long-lived, started at daemon startup and terminated only at shutdown
265
  time
266
- client I/O threads, which are the ones that talk the local protocol
267
  (LUXI) to the clients, and are short-lived
268

    
269
Master startup/failover
270
+++++++++++++++++++++++
271

    
272
In Ganeti 1.x there is no protection against failing over the master
273
to a node with stale configuration. In effect, the responsibility of
274
correct failovers falls on the admin. This is true both for the new
275
master and for when an old, offline master startup.
276

    
277
Since in 2.x we are extending the cluster state to cover the job queue
278
and have a daemon that will execute by itself the job queue, we want
279
to have more resilience for the master role.
280

    
281
The following algorithm will happen whenever a node is ready to
282
transition to the master role, either at startup time or at node
283
failover:
284

    
285
#. read the configuration file and parse the node list
286
   contained within
287

    
288
#. query all the nodes and make sure we obtain an agreement via
289
   a quorum of at least half plus one nodes for the following:
290

    
291
    - we have the latest configuration and job list (as
292
      determined by the serial number on the configuration and
293
      highest job ID on the job queue)
294

    
295
    - there is not even a single node having a newer
296
      configuration file
297

    
298
    - if we are not failing over (but just starting), the
299
      quorum agrees that we are the designated master
300

    
301
    - if any of the above is false, we prevent the current operation
302
      (i.e. we don't become the master)
303

    
304
#. at this point, the node transitions to the master role
305

    
306
#. for all the in-progress jobs, mark them as failed, with
307
   reason unknown or something similar (master failed, etc.)
308

    
309
Since due to exceptional conditions we could have a situation in which
310
no node can become the master due to inconsistent data, we will have
311
an override switch for the master daemon startup that will assume the
312
current node has the right data and will replicate all the
313
configuration files to the other nodes.
314

    
315
**Note**: the above algorithm is by no means an election algorithm; it
316
is a *confirmation* of the master role currently held by a node.
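
The quorum step of this confirmation could be sketched as follows; this
is a simplified illustration only, and the helpers ``GetNodeList`` and
``QueryNodeVersion`` as well as the data they return are assumptions,
not the final RPC names::

  def ConfirmMasterRole(my_name, my_serial, my_max_job_id, is_failover):
      """Return True if this node may assume the master role (sketch)."""
      nodes = GetNodeList()            # node names from the local config file
      quorum = len(nodes) / 2 + 1      # at least half plus one
      votes = 1                        # our own data obviously agrees with us
      for node in nodes:
          if node == my_name:
              continue
          reply = QueryNodeVersion(node)   # hypothetical RPC; None if down
          if reply is None:
              continue
          serial, max_job_id, master = reply
          if serial > my_serial or max_job_id > my_max_job_id:
              # somebody has newer configuration or jobs: refuse the role
              return False
          if not is_failover and master != my_name:
              continue                 # this node does not see us as master
          if serial == my_serial and max_job_id == my_max_job_id:
              votes += 1
      return votes >= quorum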
317

    
318
Logging
319
+++++++
320

    
321
The logging system will be switched completely to the standard python
322
logging module; currently it's logging-based, but exposes a different
323
API, which is just overhead. As such, the code will be switched over
324
to standard logging calls, and only the setup will be custom.
325

    
326
With this change, we will remove the separate debug/info/error logs,
327
and instead have always one logfile per daemon model:
328

    
329
- master-daemon.log for the master daemon
330
- node-daemon.log for the node daemon (this is the same as in 1.2)
331
- rapi-daemon.log for the RAPI daemon logs
332
- rapi-access.log, an additional log file for the RAPI that will be
333
  in the standard HTTP log format for possible parsing by other tools
334

    
335
Since the `watcher`_ will only submit jobs to the master for startup
336
of the instances, its log file will contain less information than
337
before, mainly that it will start the instance, but not the results.
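
The custom setup would then be limited to pointing the root logger at
the per-daemon file, along the lines of the following sketch; the
helper name, format string and example path are not fixed by this
design::

  import logging

  def SetupDaemonLogging(logfile, debug=False):
      """Configure the standard logging module for one daemon (sketch)."""
      handler = logging.FileHandler(logfile)
      handler.setFormatter(logging.Formatter(
          "%(asctime)s %(levelname)s %(message)s"))
      root = logging.getLogger("")
      root.addHandler(handler)
      if debug:
          root.setLevel(logging.DEBUG)
      else:
          root.setLevel(logging.INFO)

  # e.g. in the master daemon startup code:
  # SetupDaemonLogging("/var/log/ganeti/master-daemon.log")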

Node daemon changes
+++++++++++++++++++

The only change to the node daemon is that, since we need better
concurrency, we don't process the inter-node RPC calls in the node
daemon itself, but we fork and process each request in a separate
child.

Since we don't have many calls, and we only fork (not exec), the
overhead should be minimal.
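
A sketch of the per-request handling, with error handling and the
actual request dispatch (``ProcessRequest`` below is hypothetical)
omitted::

  import os

  def HandleConnection(connection):
      """Process one RPC request in a forked child (sketch)."""
      pid = os.fork()
      if pid == 0:
          # child: handle the request and exit; the parent's state is
          # inherited copy-on-write, so no exec is needed
          try:
              ProcessRequest(connection)
          finally:
              os._exit(0)
      else:
          # parent: go back to accepting new connections immediately;
          # the child is reaped via SIGCHLD/waitpid elsewhere
          connection.close()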

Caveats
+++++++

A discussed alternative is to keep the current individual processes
touching the cluster configuration model. The reasons we have not
chosen this approach are:

- the time spent reading and unserializing the cluster state
  today is not small enough that we can ignore it; the addition of
  the job queue will make the startup cost even higher. While this
  runtime cost is low, it can be on the order of a few seconds on
  bigger clusters, which for very quick commands is comparable to
  the actual duration of the computation itself

- individual commands would make it harder to implement a
  fire-and-forget job request, along the lines "start this
  instance but do not wait for it to finish"; it would require a
  model of backgrounding the operation and other things that are
  much better served by a daemon-based model

Another area of discussion is moving away from Twisted in this new
implementation. While Twisted has its advantages, there are also many
disadvantages to using it:

- first and foremost, it's not a library, but a framework; thus, if
  you use twisted, all the code needs to be 'twisted-ized' and written
  in an asynchronous manner, using deferreds; while this method works,
  it's not a common way to code and it requires that the entire process
  workflow is based around a single *reactor* (Twisted name for a main
  loop)
- the more advanced granular locking that we want to implement would
  require, if written in the async-manner, deep integration with the
  Twisted stack, to such an extent that business-logic is inseparable
  from the protocol coding; we felt that this is an unreasonable request,
  and that a good protocol library should allow complete separation of
  low-level protocol calls and business logic; by comparison, the threaded
  approach combined with the HTTP(S) protocol required (for the first
  iteration) absolutely no changes from the 1.2 code, and later changes
  for optimizing the inter-node RPC calls required just syntactic changes
  (e.g. ``rpc.call_...`` to ``self.rpc.call_...``)

Another issue is with the Twisted API stability - during the Ganeti
1.x lifetime, we had to implement workarounds many times for changes
in the Twisted version, so that for example 1.2 is able to use both
Twisted 2.x and 8.x.

In the end, since we already had an HTTP server library for the RAPI,
we just reused that for inter-node communication.


Granular locking
~~~~~~~~~~~~~~~~

We want to make sure that multiple operations can run in parallel on a Ganeti
Cluster. In order for this to happen we need to make sure concurrently run
operations don't step on each other's toes and break the cluster.

This design addresses how we are going to deal with locking so that:

- we preserve data coherency
- we prevent deadlocks
- we prevent job starvation

Reaching the maximum possible parallelism is a Non-Goal. We have identified a
set of operations that are currently bottlenecks and need to be parallelised
and have worked on those. In the future it will be possible to address other
needs, thus making the cluster more and more parallel one step at a time.

This section only talks about parallelising Ganeti level operations, aka
Logical Units, and the locking needed for that. Any other synchronization lock
needed internally by the code is outside its scope.

Library details
+++++++++++++++

The proposed library has these features:

- internally managing all the locks, making the implementation transparent
  from their usage
- automatically grabbing multiple locks in the right order (avoid deadlock)
- ability to transparently handle conversion to more granularity
- support asynchronous operation (future goal)

Locking will be valid only on the master node and will not be a
distributed operation. Therefore, in case of master failure, the
operations currently running will be aborted and the locks will be
lost; it remains to the administrator to clean up (if needed) the
operation result (e.g. make sure an instance is either installed
correctly or removed).

A corollary of this is that a master-failover operation with both
masters alive needs to happen while no operations are running, and
therefore no locks are held.

All the locks will be represented by objects (like
``lockings.SharedLock``), and the individual locks for each object
will be created at initialisation time, from the config file.

The API will have a way to grab one or more locks at the same time.
Any attempt to grab a lock in the wrong order while already holding
one will be checked for, and will fail.


The Locks
+++++++++

At the first stage we have decided to provide the following locks:

- One "config file" lock
- One lock per node in the cluster
- One lock per instance in the cluster

All the instance locks will need to be taken before the node locks, and the
node locks before the config lock. Locks will need to be acquired at the same
time for multiple instances and nodes, and internal ordering will be dealt
with within the locking library, which, for simplicity, will just use
alphabetical order.

Each lock has the following three possible statuses:

- unlocked (anyone can grab the lock)
- shared (anyone can grab/have the lock but only in shared mode)
- exclusive (no one else can grab/have the lock)
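
Putting together the ordering rule and the lock statuses above,
grabbing the locks of one level could look like the following sketch;
``lockings.SharedLock`` is the object named earlier, while the
set-management class, its method names and the ``shared`` keyword are
assumptions for illustration::

  class LockSet(object):
      """Sketch: acquire several locks of one level in a fixed order."""

      def __init__(self, lock_names):
          # one SharedLock per resource, created from the config at init time
          self.__locks = {}
          for name in lock_names:
              self.__locks[name] = lockings.SharedLock()

      def acquire(self, names, shared=False):
          """Acquire the named locks, always in alphabetical order."""
          acquired = []
          try:
              for name in sorted(names):   # fixed order avoids deadlocks
                  self.__locks[name].acquire(shared=shared)
                  acquired.append(name)
          except:
              # on failure release whatever we already hold, then re-raise
              for name in acquired:
                  self.__locks[name].release()
              raise
          return acquired

      def release(self, names):
          for name in names:
              self.__locks[name].release()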

Handling conversion to more granularity
+++++++++++++++++++++++++++++++++++++++

In order to convert to a more granular approach transparently, each time we
split a lock into more fine-grained ones we'll create a "metalock", which will
depend on those sub-locks and live for the time necessary for all the code to
convert (or forever, in some conditions). When a metalock exists all converted
code must acquire it in shared mode, so it can run concurrently, but still be
exclusive with old code, which acquires it exclusively.

In the beginning the only such lock will be what replaces the current "command"
lock, and will acquire all the locks in the system, before proceeding. This
lock will be called the "Big Ganeti Lock" because holding that one will avoid
any other concurrent Ganeti operations.

We might also want to devise more metalocks (eg. all nodes, all nodes+config)
in order to make it easier for some parts of the code to acquire what they need
without specifying it explicitly.

In the future things like the node locks could become metalocks, should we
decide to split them into an even more fine grained approach, but this will
probably be only after the first 2.0 version has been released.

Adding/Removing locks
+++++++++++++++++++++

When a new instance or a new node is created an associated lock must be added
to the list. The relevant code will need to inform the locking library of such
a change.

This needs to be compatible with every other lock in the system, especially
metalocks that guarantee to grab sets of resources without specifying them
explicitly. The implementation of this will be handled in the locking library
itself.

When instances or nodes disappear from the cluster the relevant locks
must be removed. This is easier than adding new elements, as the code
which removes them must own them exclusively already, and thus deals
with metalocks exactly as normal code acquiring those locks. Any
operation queuing on a removed lock will fail after its removal.

Asynchronous operations
+++++++++++++++++++++++

For the first version the locking library will only export synchronous
operations, which will block until the needed locks are held, and only fail if
the request is impossible or somehow erroneous.

In the future we may want to implement different types of asynchronous
operations such as:

- try to acquire this lock set and fail if not possible
- try to acquire one of these lock sets and return the first one you were
  able to get (or after a timeout) (select/poll like)

These operations can be used to prioritize operations based on available locks,
rather than making them just blindly queue for acquiring them. The inherent
risk, though, is that any code using the first operation, or setting a timeout
for the second one, is susceptible to starvation and thus may never be able to
get the required locks and complete certain tasks. Considering this,
providing/using these operations should not be among our first priorities.

Locking granularity
+++++++++++++++++++

For the first version of this code we'll convert each Logical Unit to
acquire/release the locks it needs, so locking will be at the Logical Unit
level.  In the future we may want to split logical units into independent
"tasklets" with their own locking requirements. A different design doc (or mini
design doc) will cover the move from Logical Units to tasklets.

Code examples
+++++++++++++

In general when acquiring locks we should use a code path equivalent to::

  lock.acquire()
  try:
    ...
    # other code
  finally:
    lock.release()

This makes sure we release all locks, and avoid possible deadlocks. Of
course extra care must be taken not to leave locked structures in an
unusable state, if at all possible. Note that with Python 2.5 a simpler
syntax will be possible, but we want to keep compatibility with Python
2.4 so the new constructs should not be used.

In order to avoid this extra indentation and code changes everywhere in the
Logical Units code, we decided to allow LUs to declare locks, and then execute
their code with their locks acquired. In the new world LUs are called like
this::

  # user passed names are expanded to the internal lock/resource name,
  # then known needed locks are declared
  lu.ExpandNames()
  ... some locking/adding of locks may happen ...
  # late declaration of locks for one level: this is useful because sometimes
  # we can't know which resource we need before locking the previous level
  lu.DeclareLocks() # for each level (cluster, instance, node)
  ... more locking/adding of locks can happen ...
  # these functions are called with the proper locks held
  lu.CheckPrereq()
  lu.Exec()
  ... locks declared for removal are removed, all acquired locks released ...

The Processor and the LogicalUnit class will contain exact documentation on how
locks are supposed to be declared.

Caveats
+++++++

This library will provide an easy upgrade path to bring all the code to
granular locking without breaking everything, and it will also guarantee
against a lot of common errors. Code switching from the old "lock everything"
lock to the new system, though, needs to be carefully scrutinised to be sure it
is really acquiring all the necessary locks, and none has been overlooked or
forgotten.

The code can contain other locks outside of this library, to synchronise other
threaded code (eg for the job queue) but in general these should be leaf locks
or carefully structured non-leaf ones, to avoid deadlock race conditions.


Job Queue
~~~~~~~~~

Granular locking is not enough to speed up operations; we also need a
queue to store these and to be able to process as many as possible in
parallel.

A Ganeti job will consist of multiple ``OpCodes`` which are the basic
element of operation in Ganeti 1.2 (and will remain as such). Most
command-level commands are equivalent to one OpCode, or in some cases
to a sequence of opcodes, all of the same type (e.g. evacuating a node
will generate N opcodes of type replace disks).


Job execution ("Life of a Ganeti job")
++++++++++++++++++++++++++++++++++++++++

#. Job gets submitted by the client. A new job identifier is generated and
   assigned to the job. The job is then automatically replicated [#replic]_
   to all nodes in the cluster. The identifier is returned to the client.
#. A pool of worker threads waits for new jobs. If all are busy, the job has
   to wait and the first worker finishing its work will grab it. Otherwise any
   of the waiting threads will pick up the new job.
#. Client waits for job status updates by calling a waiting RPC function.
   Log messages may be shown to the user. Until the job is started, it can also
   be canceled.
#. As soon as the job is finished, its final result and status can be retrieved
   from the server.
#. If the client archives the job, it gets moved to a history directory.
   There will be a method to archive all jobs older than a given age.

.. [#replic] We need replication in order to maintain the consistency across
   all nodes in the system; the master node only differs in the fact that
   now it is running the master daemon, but if it fails and we do a master
   failover, the jobs are still visible on the new master (though marked as
   failed).

Failures to replicate a job to other nodes will be only flagged as
errors in the master daemon log if more than half of the nodes failed,
otherwise we ignore the failure, and rely on the fact that the next
update (for still running jobs) will retry the update. For finished
jobs, it is less of a problem.

Future improvements will look into checking the consistency of the job
list and jobs themselves at master daemon startup.


Job storage
+++++++++++

Jobs are stored in the filesystem as individual files, serialized
using JSON (standard serialization mechanism in Ganeti).

The choice of storing each job in its own file was made because:

- a file can be atomically replaced
- a file can easily be replicated to other nodes
- checking consistency across nodes can be implemented very easily, since
  all job files should be (at a given moment in time) identical

The other possible choices that were discussed and discounted were:

- single big file with all job data: not feasible due to difficult updates
- in-process databases: hard to replicate the entire database to the
  other nodes, and replicating individual operations does not mean we keep
  consistency


Queue structure
+++++++++++++++

All file operations have to be done atomically by writing to a temporary file
and subsequently renaming it (see the sketch after the layout below). Except
for log messages, every change in a job is stored and replicated to other
nodes.

::

  /var/lib/ganeti/queue/
    job-1 (JSON encoded job description and status)
    [...]
    job-37
    job-38
    job-39
    lock (Queue managing process opens this file in exclusive mode)
    serial (Last job ID used)
    version (Queue format version)

    
686

    
687
Locking
688
+++++++
689

    
690
Locking in the job queue is a complicated topic. It is called from more than
691
one thread and must be thread-safe. For simplicity, a single lock is used for
692
the whole job queue.
693

    
694
A more detailed description can be found in doc/locking.txt.
695

    
696

    
697
Internal RPC
698
++++++++++++
699

    
700
RPC calls available between Ganeti master and node daemons:
701

    
702
jobqueue_update(file_name, content)
703
  Writes a file in the job queue directory.
704
jobqueue_purge()
705
  Cleans the job queue directory completely, including archived job.
706
jobqueue_rename(old, new)
707
  Renames a file in the job queue directory.
708

    
709

    
710
Client RPC
711
++++++++++
712

    
713
RPC between Ganeti clients and the Ganeti master daemon supports the following
714
operations:
715

    
716
SubmitJob(ops)
717
  Submits a list of opcodes and returns the job identifier. The identifier is
718
  guaranteed to be unique during the lifetime of a cluster.
719
WaitForJobChange(job_id, fields, [โ€ฆ], timeout)
720
  This function waits until a job changes or a timeout expires. The condition
721
  for when a job changed is defined by the fields passed and the last log
722
  message received.
723
QueryJobs(job_ids, fields)
724
  Returns field values for the job identifiers passed.
725
CancelJob(job_id)
726
  Cancels the job specified by identifier. This operation may fail if the job
727
  is already running, canceled or finished.
728
ArchiveJob(job_id)
729
  Moves a job into the โ€ฆ/archive/ directory. This operation will fail if the
730
  job has not been canceled or finished.
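
Using the python client library, a fire-and-forget submission followed
by a later status query could then look roughly like this; the module,
class and opcode names, as well as the shape of the ``QueryJobs``
return value (one row of field values per job), are assumptions for
illustration::

  from ganeti import luxi, opcodes   # hypothetical module names

  client = luxi.Client()             # talks LUXI over the master UNIX socket

  # fire-and-forget: submit the opcodes and return at once with the job id
  job_id = client.SubmitJob([opcodes.OpStartupInstance(instance_name="web1")])

  # ... later, possibly from a different client process ...
  for (status,) in client.QueryJobs([job_id], ["status"]):
      print "job %s is now %s" % (job_id, status)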
731

    
732

    
733
Job and opcode status
734
+++++++++++++++++++++
735

    
736
Each job and each opcode has, at any time, one of the following states:
737

    
738
Queued
739
  The job/opcode was submitted, but did not yet start.
740
Waiting
741
  The job/opcode is waiting for a lock to proceed.
742
Running
743
  The job/opcode is running.
744
Canceled
745
  The job/opcode was canceled before it started.
746
Success
747
  The job/opcode ran and finished successfully.
748
Error
749
  The job/opcode was aborted with an error.
750

    
751
If the master is aborted while a job is running, the job will be set to the
752
Error status once the master started again.
753

    
754

    
755
History
756
+++++++
757

    
758
Archived jobs are kept in a separate directory,
759
``/var/lib/ganeti/queue/archive/``.  This is done in order to speed up
760
the queue handling: by default, the jobs in the archive are not
761
touched by any functions. Only the current (unarchived) jobs are
762
parsed, loaded, and verified (if implemented) by the master daemon.
763

    
764

    
765
Ganeti updates
766
++++++++++++++
767

    
768
The queue has to be completely empty for Ganeti updates with changes
769
in the job queue structure. In order to allow this, there will be a
770
way to prevent new jobs entering the queue.
771

    
772

    
773
Object parameters
774
~~~~~~~~~~~~~~~~~
775

    
776
Across all cluster configuration data, we have multiple classes of
777
parameters:
778

    
779
A. cluster-wide parameters (e.g. name of the cluster, the master);
780
   these are the ones that we have today, and are unchanged from the
781
   current model
782

    
783
#. node parameters
784

    
785
#. instance specific parameters, e.g. the name of disks (LV), that
786
   cannot be shared with other instances
787

    
788
#. instance parameters, that are or can be the same for many
789
   instances, but are not hypervisor related; e.g. the number of VCPUs,
790
   or the size of memory
791

    
792
#. instance parameters that are hypervisor specific (e.g. kernel_path
793
   or PAE mode)
794

    
795

    
796
The following definitions for instance parameters will be used below:
797

    
798
:hypervisor parameter:
799
  a hypervisor parameter (or hypervisor specific parameter) is defined
800
  as a parameter that is interpreted by the hypervisor support code in
801
  Ganeti and usually is specific to a particular hypervisor (like the
802
  kernel path for `PVM`_ which makes no sense for `HVM`_).
803

    
804
:backend parameter:
805
  a backend parameter is defined as an instance parameter that can be
806
  shared among a list of instances, and is either generic enough not
807
  to be tied to a given hypervisor or cannot influence at all the
808
  hypervisor behaviour.
809

    
810
  For example: memory, vcpus, auto_balance
811

    
812
  All these parameters will be encoded into constants.py with the prefix "BE\_"
813
  and the whole list of parameters will exist in the set "BES_PARAMETERS"
814

    
815
:proper parameter:
816
  a parameter whose value is unique to the instance (e.g. the name of a LV,
817
  or the MAC of a NIC)
818

    
819
As a general rule, for all kind of parameters, โ€œNoneโ€ (or in
820
JSON-speak, โ€œnilโ€) will no longer be a valid value for a parameter. As
821
such, only non-default parameters will be saved as part of objects in
822
the serialization step, reducing the size of the serialized format.
823

    
824
Cluster parameters
825
++++++++++++++++++
826

    
827
Cluster parameters remain as today, attributes at the top level of the
828
Cluster object. In addition, two new attributes at this level will
829
hold defaults for the instances:
830

    
831
- hvparams, a dictionary indexed by hypervisor type, holding default
832
  values for hypervisor parameters that are not defined/overridden by
833
  the instances of this hypervisor type
834

    
835
- beparams, a dictionary holding (for 2.0) a single element 'default',
836
  which holds the default value for backend parameters
837

    
838
Node parameters
839
+++++++++++++++
840

    
841
Node-related parameters are very few, and we will continue using the
842
same model for these as previously (attributes on the Node object).
843

    
844
Instance parameters
845
+++++++++++++++++++
846

    
847
As described before, the instance parameters are split in three:
848
instance proper parameters, unique to each instance, instance
849
hypervisor parameters and instance backend parameters.
850

    
851
The โ€œhvparamsโ€ and โ€œbeparamsโ€ are kept in two dictionaries at instance
852
level. Only non-default parameters are stored (but once customized, a
853
parameter will be kept, even with the same value as the default one,
854
until reset).
855

    
856
The names for hypervisor parameters in the instance.hvparams subtree
857
should be choosen as generic as possible, especially if specific
858
parameters could conceivably be useful for more than one hypervisor,
859
e.g. ``instance.hvparams.vnc_console_port`` instead of using both
860
``instance.hvparams.hvm_vnc_console_port`` and
861
``instance.hvparams.kvm_vnc_console_port``.
862

    
863
There are some special cases related to disks and NICs (for example):
864
a disk has both Ganeti-related parameters (e.g. the name of the LV)
865
and hypervisor-related parameters (how the disk is presented to/named
866
in the instance). The former parameters remain as proper-instance
867
parameters, while the latter value are migrated to the hvparams
868
structure. In 2.0, we will have only globally-per-instance such
869
hypervisor parameters, and not per-disk ones (e.g. all NICs will be
870
exported as of the same type).
871

    
872
Starting from the 1.2 list of instance parameters, here is how they
873
will be mapped to the three classes of parameters:
874

    
875
- name (P)
876
- primary_node (P)
877
- os (P)
878
- hypervisor (P)
879
- status (P)
880
- memory (BE)
881
- vcpus (BE)
882
- nics (P)
883
- disks (P)
884
- disk_template (P)
885
- network_port (P)
886
- kernel_path (HV)
887
- initrd_path (HV)
888
- hvm_boot_order (HV)
889
- hvm_acpi (HV)
890
- hvm_pae (HV)
891
- hvm_cdrom_image_path (HV)
892
- hvm_nic_type (HV)
893
- hvm_disk_type (HV)
894
- vnc_bind_address (HV)
895
- serial_no (P)
896

    
897

    
898
Parameter validation
899
++++++++++++++++++++
900

    
901
To support the new cluster parameter design, additional features will
902
be required from the hypervisor support implementations in Ganeti.
903

    
904
The hypervisor support  implementation API will be extended with the
905
following features:
906

    
907
:PARAMETERS: class-level attribute holding the list of valid parameters
908
  for this hypervisor
909
:CheckParamSyntax(hvparams): checks that the given parameters are
910
  valid (as in the names are valid) for this hypervisor; usually just
911
  comparing ``hvparams.keys()`` and ``cls.PARAMETERS``; this is a class
912
  method that can be called from within master code (i.e. cmdlib) and
913
  should be safe to do so
914
:ValidateParameters(hvparams): verifies the values of the provided
915
  parameters against this hypervisor; this is a method that will be
916
  called on the target node, from backend.py code, and as such can
917
  make node-specific checks (e.g. kernel_path checking)
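
A hypervisor class implementing the features above could look roughly
like the following sketch; the base class name, the error type and the
exact parameter list are assumptions for illustration::

  import os

  class XenPvmHypervisor(BaseHypervisor):        # base class name assumed
      """Sketch of the per-hypervisor parameter checks."""

      PARAMETERS = ["kernel_path", "initrd_path"]

      @classmethod
      def CheckParamSyntax(cls, hvparams):
          """Name-level check; safe to call from master code (cmdlib)."""
          for name in hvparams.keys():
              if name not in cls.PARAMETERS:
                  raise HypervisorError("Unknown parameter '%s'" % name)

      def ValidateParameters(self, hvparams):
          """Value-level check; runs on the target node via backend.py."""
          kernel = hvparams.get("kernel_path")
          if kernel and not os.path.isfile(kernel):
              raise HypervisorError("Kernel %s not found on this node" % kernel)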

Default value application
+++++++++++++++++++++++++

The application of defaults to an instance is done in the Cluster
object, via two new methods as follows:

- ``Cluster.FillHV(instance)``, returns 'filled' hvparams dict, based on
  instance's hvparams and cluster's ``hvparams[instance.hypervisor]``

- ``Cluster.FillBE(instance, be_type="default")``, which returns the
  beparams dict, based on the instance and cluster beparams

The FillHV/BE transformations will be used, for example, in the RpcRunner
when sending an instance for activation/stop, and the sent instance
hvparams/beparams will have the final value (noded code doesn't know
about defaults).

LU code will need to self-call the transformation, if needed.
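
The filling itself is essentially a dictionary merge in which instance
values override cluster defaults; a sketch of the two methods, under
the assumption that the Cluster config object carries the two default
dictionaries described earlier, could be::

  class Cluster(ConfigObject):        # existing config object (sketch)
      def FillHV(self, instance):
          """Return instance.hvparams with cluster defaults filled in."""
          # start from the cluster-level defaults for this hypervisor...
          filled = self.hvparams.get(instance.hypervisor, {}).copy()
          # ...and let any value customized on the instance override them
          filled.update(instance.hvparams)
          return filled

      def FillBE(self, instance, be_type="default"):
          """Same principle for the backend parameters."""
          filled = self.beparams.get(be_type, {}).copy()
          filled.update(instance.beparams)
          return filled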

Opcode changes
++++++++++++++

The parameter changes will have impact on the OpCodes, especially on
the following ones:

- ``OpCreateInstance``, where the new hv and be parameters will be sent as
  dictionaries; note that all hv and be parameters are now optional, as
  the values can be instead taken from the cluster
- ``OpQueryInstances``, where we have to be able to query these new
  parameters; the syntax for names will be ``hvparam/$NAME`` and
  ``beparam/$NAME`` for querying an individual parameter out of one
  dictionary, and ``hvparams``, respectively ``beparams``, for the whole
  dictionaries
- ``OpModifyInstance``, where the modified parameters are sent as
  dictionaries

Additionally, we will need new OpCodes to modify the cluster-level
defaults for the be/hv sets of parameters.

Caveats
+++++++

One problem that might appear is that our classification is not
complete or not good enough, and we'll need to change this model. As
a last resort, we would need to roll back and keep the 1.2 style.

Another problem is that classification of one parameter is unclear
(e.g. ``network_port``, is this BE or HV?); in this case we'll take
the risk of having to move parameters later between classes.

Security
++++++++

The only security issue that we foresee is if some new parameters will
have sensitive values. If so, we will need to have a way to export the
config data while purging the sensitive values.

E.g. for the DRBD shared secrets, we could export these with the
values replaced by an empty string.

Feature changes
---------------

The main feature-level changes will be:

- a number of disk related changes
- removal of the fixed two-disk, one-NIC per instance limitation

Disk handling changes
~~~~~~~~~~~~~~~~~~~~~

The storage options available in Ganeti 1.x were introduced based on
then-current software (first DRBD 0.7 then later DRBD 8) and the
estimated usage patterns. However, experience has later shown that some
assumptions made initially are not true and that more flexibility is
needed.

One main assumption made was that disk failures should be treated as 'rare'
events, and that each of them needs to be manually handled in order to ensure
data safety; however, both these assumptions are false:

- disk failures can be a common occurrence, based on usage patterns or cluster
  size
- our disk setup is robust enough (referring to DRBD8 + LVM) that we could
  automate more of the recovery

Note that we still don't have fully-automated disk recovery as a goal, but our
goal is to reduce the manual work needed.

As such, we plan the following main changes:

- DRBD8 is much more flexible and stable than its previous version (0.7),
  such that removing the support for the ``remote_raid1`` template and
  focusing only on DRBD8 is easier

- dynamic discovery of DRBD devices is not actually needed in a cluster
  where the DRBD namespace is controlled by Ganeti; switching to a static
  assignment (done at either instance creation time or change secondary time)
  will change the disk activation time from O(n) to O(1), which on big
  clusters is a significant gain

- remove the hard dependency on LVM (currently all available storage types are
  ultimately backed by LVM volumes) by introducing file-based storage

Additionally, a number of smaller enhancements are also planned:

- support variable number of disks
- support read-only disks

Future enhancements in the 2.x series, which do not require base design
changes, might include:

- enhancement of the LVM allocation method in order to try to keep
  all of an instance's virtual disks on the same physical
  disks

- add support for DRBD8 authentication at handshake time in
  order to ensure each device connects to the correct peer

- remove the restrictions on failover only to the secondary
  which creates very strict rules on cluster allocation

DRBD minor allocation
+++++++++++++++++++++

Currently, when trying to identify or activate a new DRBD (or MD)
device, the code scans all in-use devices in order to see if we find
one that looks similar to our parameters and is already in the desired
state or not. Since this needs external commands to be run, it is very
slow when more than a few devices are already present.

Therefore, we will change the discovery model from dynamic to
static. When a new device is logically created (added to the
configuration) a free minor number is computed from the list of
devices that should exist on that node and assigned to that
device.

At device activation, if the minor is already in use, we check if
it has our parameters; if not so, we just destroy the device (if
possible, otherwise we abort) and start it with our own
parameters.

This means that we in effect take ownership of the minor space for
that device type; if there's a user-created DRBD minor, it will be
automatically removed.

The change will have the effect of reducing the number of external
commands run per device from a constant number times the index of the
first free DRBD minor to just a constant number.
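
Computing the free minor is then a purely in-memory operation over the
configuration, along these lines (a sketch only; the helper returning
the minors of the disks configured on a node is assumed)::

  def FindFreeMinor(node_name, config):
      """Return the first DRBD minor not in use on a node (sketch).

      'config' is the in-memory cluster configuration; no external
      commands are run.
      """
      used = {}
      for minor in config.GetNodeDrbdMinors(node_name):   # hypothetical
          used[minor] = True
      # the smallest non-negative integer not present in the used set
      minor = 0
      while minor in used:
          minor += 1
      return minor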

Removal of obsolete device types (MD, DRBD7)
++++++++++++++++++++++++++++++++++++++++++++

We need to remove these device types because of two issues. First,
DRBD7 has bad failure modes in case of dual failures (both network and
disk): it cannot propagate the error up the device stack and instead
just panics. Second, due to the asymmetry between primary and
secondary in MD+DRBD mode, we cannot do live failover (not even if we
had MD+DRBD8).

File-based storage support
++++++++++++++++++++++++++

Using files instead of logical volumes for instance storage would
allow us to get rid of the hard requirement for volume groups for
testing clusters and it would also allow usage of SAN storage to do
live failover taking advantage of this storage solution.

Better LVM allocation
+++++++++++++++++++++

Currently, the LV to PV allocation mechanism is a very simple one: at
each new request for a logical volume, tell LVM to allocate the volume
in order based on the amount of free space. This is good for
simplicity and for keeping the usage equally spread over the available
physical disks, however it introduces a problem that an instance could
end up with its (currently) two drives on two physical disks, or
(worse) that the data and metadata for a DRBD device end up on
different drives.

This is bad because it causes unneeded ``replace-disks`` operations in
case of a physical failure.

The solution is to batch allocations for an instance and make the LVM
handling code try to allocate as close as possible all the storage of
one instance. We will still allow the logical volumes to spill over to
additional disks as needed.

Note that this clustered allocation can only be attempted at initial
instance creation, or at change secondary node time. At add disk time,
or at replacing individual disks, it's not easy enough to compute the
current disk map so we'll not attempt the clustering.

DRBD8 peer authentication at handshake
++++++++++++++++++++++++++++++++++++++

DRBD8 has a new feature that allows authentication of the peer at
connect time. We can use this more to prevent connecting to the wrong
peer than to secure the connection. Even though we never had issues
with wrong connections, it would be good to implement this.


LVM self-repair (optional)
++++++++++++++++++++++++++

The complete failure of a physical disk is very tedious to
troubleshoot, mainly because of the many failure modes and the many
steps needed. We can safely automate some of the steps, more
specifically the ``vgreduce --removemissing`` using the following
method:

#. check if all nodes have consistent volume groups
#. if yes, and previous status was yes, do nothing
#. if yes, and previous status was no, save status and restart
#. if no, and previous status was no, do nothing
#. if no, and previous status was yes:
    #. if more than one node is inconsistent, do nothing
    #. if only one node is inconsistent:
        #. run ``vgreduce --removemissing``
        #. log this occurrence in the Ganeti log in a form that
           can be used for monitoring
        #. [FUTURE] run ``replace-disks`` for all
           instances affected
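
The decision logic above is a small state machine keyed on the previous
and current consistency status; a sketch of one iteration (the helpers
that check a node's volume group and run the repair are assumed)::

  import logging

  def CheckAndRepairVolumeGroups(nodes, prev_consistent):
      """One pass of the optional self-repair check (sketch).

      Returns the new 'all consistent' status to be saved for the next run.
      """
      bad_nodes = [n for n in nodes if not NodeVgConsistent(n)]  # hypothetical
      if not bad_nodes:
          # everything consistent: remember the (possibly new) good status
          return True
      if not prev_consistent:
          # was already broken last time: do nothing, the problem is known
          return False
      if len(bad_nodes) == 1:
          # exactly one node just became inconsistent: safe to auto-repair
          RunVgReduceRemoveMissing(bad_nodes[0])       # hypothetical helper
          logging.warning("Ran vgreduce --removemissing on %s", bad_nodes[0])
      # more than one inconsistent node: too risky, only record the status
      return False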
1141

    
1142
Failover to any node
1143
++++++++++++++++++++
1144

    
1145
With a modified disk activation sequence, we can implement the
1146
*failover to any* functionality, removing many of the layout
1147
restrictions of a cluster:
1148

    
1149
- the need to reserve memory on the current secondary: this gets reduced to
1150
  a must to reserve memory anywhere on the cluster
1151

    
1152
- the need to first failover and then replace secondary for an
1153
  instance: with failover-to-any, we can directly failover to
1154
  another node, which also does the replace disks at the same
1155
  step
1156

    
1157
In the following, we denote the current primary by P1, the current
1158
secondary by S1, and the new primary and secondaries by P2 and S2. P2
1159
is fixed to the node the user chooses, but the choice of S2 can be
1160
made between P1 and S1. This choice can be constrained, depending on
1161
which of P1 and S1 has failed.
1162

    
1163
- if P1 has failed, then S1 must become S2, and live migration is not possible
1164
- if S1 has failed, then P1 must become S2, and live migration could be
1165
  possible (in theory, but this is not a design goal for 2.0)
1166

    
1167
The algorithm for performing the failover is straightforward:
1168

    
1169
- verify that S2 (the node the user has chosen to keep as secondary) has
1170
  valid data (is consistent)
1171

    
1172
- tear down the current DRBD association and setup a DRBD pairing between
1173
  P2 (P2 is indicated by the user) and S2; since P2 has no data, it will
1174
  start re-syncing from S2
1175

    
1176
- as soon as P2 is in state SyncTarget (i.e. after the resync has started
1177
  but before it has finished), we can promote it to primary role (r/w)
1178
  and start the instance on P2
1179

    
1180
- as soon as the P2?S2 sync has finished, we can remove
1181
  the old data on the old node that has not been chosen for
1182
  S2
1183

    
1184
Caveats: during the P2?S2 sync, a (non-transient) network error
1185
will cause I/O errors on the instance, so (if a longer instance
1186
downtime is acceptable) we can postpone the restart of the instance
1187
until the resync is done. However, disk I/O errors on S2 will cause
1188
data loss, since we don't have a good copy of the data anymore, so in
1189
this case waiting for the sync to complete is not an option. As such,
1190
it is recommended that this feature is used only in conjunction with
1191
proper disk monitoring.
1192

    
1193

    
1194
Live migration note: While failover-to-any is possible for all choices
1195
of S2, migration-to-any is possible only if we keep P1 as S2.
1196

    
1197
Caveats
1198
+++++++
1199

    
1200
The dynamic device model, while more complex, has an advantage: it
1201
will not reuse by mistake the DRBD device of another instance, since
1202
it always looks for either our own or a free one.
1203

    
1204
The static one, in contrast, will assume that given a minor number N,
1205
it's ours and we can take over. This needs careful implementation such
1206
that if the minor is in use, either we are able to cleanly shut it
1207
down, or we abort the startup. Otherwise, it could be that we start
1208
syncing between two instance's disks, causing data loss.
1209

    
1210

    
1211
Variable number of disk/NICs per instance
1212
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1213

    
1214
Variable number of disks
1215
++++++++++++++++++++++++
1216

    
1217
In order to support high-security scenarios (for example read-only sda
1218
and read-write sdb), we need to make a fully flexibly disk
1219
definition. This has less impact that it might look at first sight:
1220
only the instance creation has hard coded number of disks, not the disk
1221
handling code. The block device handling and most of the instance
1222
handling code is already working with "the instance's disks" as
1223
opposed to "the two disks of the instance", but some pieces are not
1224
(e.g. import/export) and the code needs a review to ensure safety.
1225

    
1226
The objective is to be able to specify the number of disks at
1227
instance creation, and to be able to toggle from read-only to
1228
read-write a disk afterward.
1229

    
1230
Variable number of NICs
1231
+++++++++++++++++++++++
1232

    
1233
Similar to the disk change, we need to allow multiple network
1234
interfaces per instance. This will affect the internal code (some
1235
function will have to stop assuming that ``instance.nics`` is a list
1236
of length one), the OS API which currently can export/import only one
1237
instance, and the command line interface.
1238

    
Interface changes
-----------------

There are two areas of interface changes: API-level changes (the OS
interface and the RAPI interface) and the command line interface
changes.

OS interface
~~~~~~~~~~~~

The current Ganeti OS interface, version 5, is tailored for Ganeti 1.2.
The interface is composed of a series of scripts which get called with
certain parameters to perform OS-dependent operations on the cluster.
The current scripts are:

create
  called when a new instance is added to the cluster
export
  called to export an instance disk to a stream
import
  called to import from a stream to a new instance
rename
  called to perform the OS-specific operations necessary for renaming
  an instance

Currently these scripts suffer from the limitations of Ganeti 1.2: for
example they accept exactly one block and one swap device to operate
on, rather than any number of generic block devices; they blindly
assume that an instance will have just one network interface; and they
cannot be configured to optimise the instance for a particular
hypervisor.

Since Ganeti 2.0 will support multiple hypervisors and a non-fixed
number of NICs and disks, the OS interface needs to change in order to
transmit the appropriate amount of information about an instance to
its managing operating system when operating on it. Moreover, since
some old assumptions commonly made in OS scripts are no longer valid,
we need to re-establish a common understanding of what can and what
cannot be assumed about the Ganeti environment.

When designing the new OS API our priorities are:

- ease of use
- future extensibility
- ease of porting from the old API
- modularity

As such we want to limit the number of scripts that must be written to
support an OS, and make it easy to share code between them by making
their input uniform. We will also leave the current script structure
unchanged, as far as we can, and make a few of the scripts (import,
export and rename) optional. Most information will be passed to the
scripts through environment variables, for ease of access and, at the
same time, ease of using only the information a script needs.

    
The Scripts
+++++++++++

As in Ganeti 1.2, every OS which wants to be installed in Ganeti needs
to support the following functionality, through scripts:

create:
  used to create a new instance running that OS. This script should
  prepare the block devices, and install them so that the new OS can
  boot under the specified hypervisor.
export (optional):
  used to export an installed instance using the given OS to a format
  which can be used to import it back into a new instance.
import (optional):
  used to import an exported instance into a new one. This script is
  similar to create, but the new instance should have the content of
  the export, rather than contain a pristine installation.
rename (optional):
  used to perform the internal OS-specific operations needed to rename
  an instance.

If any optional script is not implemented, Ganeti will refuse to
perform the given operation on instances using the non-implementing
OS. Of course the create script is mandatory, and it doesn't make
sense to support either the export or the import operation but not
both.

    
Incompatibilities with 1.2
__________________________

We expect the following incompatibilities between the OS scripts for
1.2 and the ones for 2.0:

- Input parameters: in 1.2 these were passed on the command line, in
  2.0 we'll use environment variables, as there will be a lot more
  information and not all OSes may care about all of it.
- Number of calls: export scripts will be called once for each device
  the instance has, and import scripts once for every exported disk.
  Imported instances will be forced to have a number of disks greater
  than or equal to that of the export.
- Some scripts are not compulsory: if such a script is missing the
  relevant operations will be forbidden for instances of that OS. This
  makes it easier to distinguish between unsupported operations and
  no-op ones (if any).

    
Input
_____

Rather than using command line flags, as they do now, scripts will
accept inputs from environment variables. We expect the following
input values:

OS_API_VERSION
  The version of the OS API that the following parameters comply with;
  this is used so that in the future we could have OSes supporting
  multiple versions and thus Ganeti can send the proper version in
  this parameter
INSTANCE_NAME
  Name of the instance acted on
HYPERVISOR
  The hypervisor the instance should run on (e.g. 'xen-pvm', 'xen-hvm',
  'kvm')
DISK_COUNT
  The number of disks this instance will have
NIC_COUNT
  The number of NICs this instance will have
DISK_<N>_PATH
  Path to the Nth disk.
DISK_<N>_ACCESS
  W if read/write, R if read only. OS scripts are not supposed to touch
  read-only disks, but they are passed to the script so it knows about
  them.
DISK_<N>_FRONTEND_TYPE
  Type of the disk as seen by the instance. Can be 'scsi', 'ide',
  'virtio'
DISK_<N>_BACKEND_TYPE
  Type of the disk as seen from the node. Can be 'block', 'file:loop'
  or 'file:blktap'
NIC_<N>_MAC
  MAC address for the Nth network interface
NIC_<N>_IP
  IP address for the Nth network interface, if available
NIC_<N>_BRIDGE
  Node bridge the Nth network interface will be connected to
NIC_<N>_FRONTEND_TYPE
  Type of the Nth NIC as seen by the instance. For example 'virtio',
  'rtl8139', etc.
DEBUG_LEVEL
  Whether more output should be produced, for debugging purposes.
  Currently the only valid values are 0 and 1.

These are only the basic variables we are thinking of now, but more
may come during the implementation and they will be documented in the
``ganeti-os-api`` man page. All these variables will be available to
all scripts.
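
As an illustration only (the helper name is made up and not part of
the design), a create script written in Python could consume these
variables along the following lines::

  import os

  def read_devices():
      """Collect the disk and NIC definitions passed by Ganeti.

      Sketch only: a real create script would go on to partition and
      format the writable disks and install the OS onto them.
      """
      disks = []
      for idx in range(int(os.environ["DISK_COUNT"])):
          disks.append({
              "path": os.environ["DISK_%d_PATH" % idx],
              "access": os.environ["DISK_%d_ACCESS" % idx],
              "frontend": os.environ.get("DISK_%d_FRONTEND_TYPE" % idx),
              "backend": os.environ.get("DISK_%d_BACKEND_TYPE" % idx),
          })
      nics = []
      for idx in range(int(os.environ["NIC_COUNT"])):
          nics.append({
              "mac": os.environ["NIC_%d_MAC" % idx],
              "ip": os.environ.get("NIC_%d_IP" % idx, ""),
              "bridge": os.environ.get("NIC_%d_BRIDGE" % idx),
          })
      return disks, nics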

    
Some scripts will need additional information to work. These will have
per-script variables, such as for example:

OLD_INSTANCE_NAME
  rename: the name the instance should be renamed from.
EXPORT_DEVICE
  export: device to be exported, a snapshot of the actual device. The
  data must be exported to stdout.
EXPORT_INDEX
  export: sequential number of the instance device targeted.
IMPORT_DEVICE
  import: device to send the data to, part of the new instance. The
  data must be imported from stdin.
IMPORT_INDEX
  import: sequential number of the instance device targeted.

(Rationale for INSTANCE_NAME as an environment variable: the instance
name is always needed and we could pass it on the command line. On the
other hand, though, this would force scripts to both access the
environment and parse the command line, so we'll move it to the
environment for uniformity.)

Output/Behaviour
________________

As discussed, scripts should only send user-targeted information to
stderr. The create and import scripts are supposed to format/initialise
the given block devices and install the correct instance data. The
export script is supposed to export instance data to stdout in a format
understandable by the import script. The data will be compressed by
Ganeti, so no compression should be done. The rename script should only
modify the instance's knowledge of what its name is.
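
As an illustration of how small such scripts can be, the per-script
variables described above would allow a trivial export/import pair
roughly like the following sketch (Python, hypothetical function
names; Ganeti itself takes care of compression)::

  import os
  import shutil
  import sys

  def default_export():
      # Dump the snapshot device verbatim to stdout; no compression,
      # since Ganeti compresses the stream itself.
      with open(os.environ["EXPORT_DEVICE"], "rb") as dev:
          shutil.copyfileobj(dev, sys.stdout.buffer)

  def default_import():
      # Write the exported data received on stdin back onto the new
      # instance's target device.
      with open(os.environ["IMPORT_DEVICE"], "wb") as dev:
          shutil.copyfileobj(sys.stdin.buffer, dev)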

    
Other declarative style features
++++++++++++++++++++++++++++++++

Similar to Ganeti 1.2, OS specifications will need to provide a
'ganeti_api_version' file containing a list of numbers matching the
version(s) of the API they implement. Ganeti itself will always be
compatible with one version of the API and may maintain backwards
compatibility if it's feasible to do so. The numbers are one per line,
so an OS supporting both version 5 and version 20 will have a file
containing two lines. This is different from Ganeti 1.2, which only
supported one version number.

In addition to that, an OS will be able to declare that it supports
only a subset of the Ganeti hypervisors, by declaring them in the
'hypervisors' file.
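
For example, an OS that implements both version 5 and version 20 of
the API would ship a 'ganeti_api_version' file with the following two
lines::

  5
  20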

    
Caveats/Notes
+++++++++++++

We might want to have a "default" import/export behaviour that just
dumps all disks and restores them. This can save work as most systems
will just do this, while allowing flexibility for different systems.

Environment variables are limited in size, but we expect that there
will be enough space to store the information we need. If we discover
that this is not the case we may want to go to a more complex API, such
as storing that information on the filesystem and providing the OS
script with the path to a file where it is encoded in some format.

Remote API changes
~~~~~~~~~~~~~~~~~~

The first Ganeti remote API (RAPI) was designed and deployed with the
Ganeti 1.2.5 release. That version provided read-only access to the
cluster state. A fully functional read-write API demands significant
internal changes, which will be implemented in version 2.0.

We decided to implement the Ganeti RAPI in a RESTful way, which is
aligned with the key features we are looking for: it is a simple,
stateless, scalable and extensible paradigm for API implementation. As
transport it uses HTTP over SSL, and we are implementing it with JSON
encoding, but in a way that makes it possible to extend it and provide
any other encoding.

    
Design
++++++

The Ganeti RAPI is implemented as an independent daemon, running on
the same node and with the same permission level as the Ganeti master
daemon. Communication is done through the LUXI library to the master
daemon. In order to keep communication asynchronous, RAPI processes
two types of client requests:

- queries: the server is able to answer immediately
- job submission: some time is required for a useful response

In the query case, the requested data is sent back to the client in
the HTTP response body. Typical examples of queries would be: list of
nodes, instances, cluster info, etc.

In the case of job submission, the client receives a job ID, an
identifier which allows it to query the job's progress in the job
queue (see `Job Queue`_).

Internally, each exported object has a version identifier, which is
used as a state identifier in the HTTP header E-Tag field for
requests/responses, to avoid race conditions.

    
Resource representation
+++++++++++++++++++++++

The key difference of using REST instead of other API styles is that
REST requires the separation of services via resources with unique
URIs. Each of them should have a limited amount of state and support
the standard HTTP methods: GET, POST, DELETE, PUT.

For example, in Ganeti's case we can have a set of URIs:

 - ``/{clustername}/instances``
 - ``/{clustername}/instances/{instancename}``
 - ``/{clustername}/instances/{instancename}/tag``
 - ``/{clustername}/tag``

A GET request to ``/{clustername}/instances`` will return the list of
instances, a POST to ``/{clustername}/instances`` should create a new
instance, a DELETE ``/{clustername}/instances/{instancename}`` should
delete the instance, and a GET ``/{clustername}/tag`` should return
the cluster tags.
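
To make the intended usage concrete, here is a hedged sketch of a
client issuing such a query (Python standard library only; host name,
credentials and the exact URI layout are illustrative, and the version
prefix mentioned below is omitted; see also the Security section)::

  import json
  import urllib.request

  # Made-up cluster name and credentials, purely for illustration.
  BASE = "https://cluster1.example.com"

  password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
  password_mgr.add_password(None, BASE, "rapi-user", "secret")
  opener = urllib.request.build_opener(
      urllib.request.HTTPBasicAuthHandler(password_mgr))

  # A query: the instance list comes back directly in the response body.
  with opener.open(BASE + "/cluster1/instances") as resp:
      instances = json.loads(resp.read().decode("utf-8"))
  print(instances)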

    
Each resource URI will have a version prefix. The resource IDs are to
be determined.

Internal encoding might be JSON, XML, or any other. The JSON encoding
fits the Ganeti RAPI needs nicely. The client can request a specific
representation via the Accept field in the HTTP header.

REST uses HTTP as its transport and application protocol for resource
access. The set of possible responses is a subset of standard HTTP
responses.

The statelessness model provides additional reliability and
transparency to operations (e.g. only one request needs to be analyzed
to understand the in-progress operation, not a sequence of multiple
requests/responses).

Security
++++++++

With the write functionality, security becomes a much bigger issue.
The Ganeti RAPI uses basic HTTP authentication on top of an
SSL-secured connection to grant access to an exported resource. The
password is stored locally in an Apache-style ``.htpasswd`` file. Only
one level of privileges is supported.

    
Caveats
+++++++

The model detailed above for job submission requires the client to
poll periodically for updates to the job; an alternative would be to
allow the client to request a callback, or a 'wait for updates' call.

The callback model was not considered due to the following two issues:

- callbacks would require a new model of allowed callback URLs,
  together with a method of managing these
- callbacks only work when the client and the master are in the same
  security domain, and they fail in the other cases (e.g. when there is
  a firewall between the client and the RAPI daemon that only allows
  client-to-RAPI calls, which is usual in DMZ cases)

The 'wait for updates' method is not suited to the HTTP protocol,
where requests are supposed to be short-lived.
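
Under the polling model chosen here, a client that has submitted a job
would then do something along these lines (a sketch continuing the
client example above; the job resource URI and the status values are
hypothetical, as the actual resource names are still to be decided)::

  import json
  import time

  def wait_for_job(opener, base, job_id, delay=5.0):
      # Poll the (hypothetical) job resource until a final status is
      # reported, keeping each individual HTTP request short-lived.
      while True:
          with opener.open("%s/jobs/%s" % (base, job_id)) as resp:
              job = json.loads(resp.read().decode("utf-8"))
          if job.get("status") in ("success", "error"):
              return job
          time.sleep(delay)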

    
Command line changes
~~~~~~~~~~~~~~~~~~~~

Ganeti 2.0 introduces several new features as well as new ways to
handle instance resources like disks or network interfaces. This
requires some noticeable changes in the way command line arguments are
handled:

- extend and modify command line syntax to support new features
- ensure consistent patterns in command line arguments to reduce
  cognitive load

The design changes that require these modifications are, in no
particular order:

- flexible instance disk handling: support a variable number of disks
  with varying properties per instance,
- flexible instance network interface handling: support a variable
  number of network interfaces with varying properties per instance,
- multiple hypervisors: multiple hypervisors can be active on the same
  cluster, each supporting different parameters,
- support for device type CDROM (via ISO image)

    
As such, there are several areas of Ganeti where the command line
arguments will change:

- Cluster configuration

  - cluster initialization
  - cluster default configuration

- Instance configuration

  - handling of network cards for instances,
  - handling of disks for instances,
  - handling of CDROM devices and
  - handling of hypervisor specific options.

    
Notes about device removal/addition
+++++++++++++++++++++++++++++++++++

To avoid problems with device location changes (e.g. the second network
interface of the instance becoming the first or third and the like),
the list of network/disk devices is treated as a stack, i.e. devices
can only be added/removed at the end of the list of devices of each
class (disk or network) for each instance.

gnt-instance commands
+++++++++++++++++++++

The commands for gnt-instance will be modified and extended to allow
for the new functionality:

- the add command will be extended to support the new device and
  hypervisor options,
- the modify command continues to handle all modifications to
  instances, but will be extended with new arguments for handling
  devices.

    
Network Device Options
++++++++++++++++++++++

The generic format of the network device option is::

  --net $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:mac: MAC address of the network interface, accepts either a valid
  MAC address or the string 'auto'. If 'auto' is specified, a new MAC
  address will be generated randomly. If the mac device option is not
  specified, the default value 'auto' is assumed.
:bridge: network bridge the network interface is connected
  to. Accepts either a valid bridge name (the specified bridge must
  exist on the node(s)) as string or the string 'auto'. If 'auto' is
  specified, the default bridge is used. If the bridge option is not
  specified, the default value 'auto' is assumed.
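
For example, an instance could be created with two network interfaces
using something along these lines (instance and bridge names are only
illustrative, and the remaining creation arguments are elided)::

  gnt-instance add --net 0:mac=auto,bridge=auto --net 1:mac=auto,bridge=br0 ... test-instance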

    
Disk Device Options
+++++++++++++++++++

The generic format of the disk device option is::

  --disk $DEVNUM[:$OPTION=$VALUE][,$OPTION=$VALUE]

:$DEVNUM: device number, unsigned integer, starting at 0,
:$OPTION: device option, string,
:$VALUE: device option value, string.

Currently, the following device options will be defined (open to
further changes):

:size: size of the disk device, either a positive number, specifying
  the disk size in mebibytes, or a number followed by a magnitude suffix
  (M for mebibytes, G for gibibytes). Also accepts the string 'auto' in
  which case the default disk size will be used. If the size option is
  not specified, 'auto' is assumed. This option is not valid for all
  disk layout types.
:access: access mode of the disk device, a single letter, valid values
  are:

  - *w*: read/write access to the disk device or
  - *r*: read-only access to the disk device.

  If the access mode is not specified, the default mode of read/write
  access will be configured.
:path: path to the image file for the disk device, string. No default
  exists. This option is not valid for all disk layout types.
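
Similarly, a hedged example of requesting two disks with different
sizes and access modes at creation time (again with the remaining
arguments elided)::

  gnt-instance add --disk 0:size=10G --disk 1:size=512,access=r ... test-instance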

    
Adding devices
++++++++++++++

To add devices to an already existing instance, use the device type
specific option to gnt-instance modify. Currently, there are two
device type specific options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax to the device specific options is similar to the generic
device options, but instead of specifying a device number like for
gnt-instance add, you specify the magic string add. The new device
will always be appended at the end of the list of devices of this type
for the specified instance, e.g. if the instance has disk devices 0, 1
and 2, the newly added disk device will be disk device 3.

Example::

  gnt-instance modify --net add:mac=auto test-instance

    
Removing devices
++++++++++++++++

Removing devices from an instance is done via gnt-instance modify. The
same device specific options as for adding devices are used. Instead of
a device number and further device options, only the magic string
remove is specified. It will always remove the last device in the list
of devices of this type for the instance specified, e.g. if the
instance has disk devices 0, 1, 2 and 3, disk device number 3 will be
removed.

Example::

  gnt-instance modify --net remove test-instance

    
Modifying devices
+++++++++++++++++

Modifying devices is also done with device type specific options to
the gnt-instance modify command. There are currently two device type
options supported:

:--net: for network interface cards
:--disk: for disk devices

The syntax to the device specific options is similar to the generic
device options. The device number you specify identifies the device to
be modified.

Example::

  gnt-instance modify --disk 2:access=r test-instance

    
Hypervisor Options
++++++++++++++++++

Ganeti 2.0 will support more than one hypervisor. Different
hypervisors have various options that only apply to a specific
hypervisor. Those hypervisor specific options are treated specially
via the ``--hypervisor`` option. The generic syntax of the hypervisor
option is as follows::

  --hypervisor $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor to use, string,
  has to match the supported hypervisors. Example: xen-pvm

:$OPTION: hypervisor option name, string
:$VALUE: hypervisor option value, string

The hypervisor option for an instance can be set at instance creation
time via the ``gnt-instance add`` command. If the hypervisor for an
instance is not specified upon instance creation, the default
hypervisor will be used.
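
For illustration, an HVM instance could then be created with
hypervisor-specific options like the following (the option names
mirror the modify example below; the other creation arguments are
elided)::

  gnt-instance add --hypervisor xen-hvm:cdrom=/srv/boot.iso,boot_order=cdrom:network ... test-instance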

    
Modifying hypervisor parameters
+++++++++++++++++++++++++++++++

The hypervisor parameters of an existing instance can be modified
using the ``--hypervisor`` option of the ``gnt-instance modify``
command. However, the hypervisor type of an existing instance can not
be changed, only the particular hypervisor specific options can be
changed. Therefore, the format of the option parameters has been
simplified to omit the hypervisor name and only contain the comma
separated list of option-value pairs.

Example::

  gnt-instance modify --hypervisor cdrom=/srv/boot.iso,boot_order=cdrom:network test-instance

    
gnt-cluster commands
++++++++++++++++++++

The commands for gnt-cluster will be extended to allow setting and
changing the default parameters of the cluster:

- The init command will be extended to support the --defaults option
  to set the cluster defaults upon cluster initialization.
- The modify command will be added to modify the cluster
  parameters. It will support the --defaults option to change the
  cluster defaults.

Cluster defaults
++++++++++++++++

The generic format of the cluster default setting option is::

  --defaults $OPTION=$VALUE[,$OPTION=$VALUE]

:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.

Currently, the following cluster default options are defined (open to
further changes):

:hypervisor: the default hypervisor to use for new instances,
  string. Must be a valid hypervisor known to and supported by the
  cluster.
:disksize: the disksize for newly created instance disks, where
  applicable. Must be either a positive number, in which case the unit
  of megabyte is assumed, or a positive number followed by a supported
  magnitude symbol (M for megabyte or G for gigabyte).
:bridge: the default network bridge to use for newly created instance
  network interfaces, string. Must be a valid bridge name of a bridge
  existing on the node(s).
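
For example, cluster defaults could be provided at initialization time
along these lines (values, bridge and cluster names are purely
illustrative, and other initialization arguments are elided)::

  gnt-cluster init --defaults hypervisor=xen-pvm,disksize=10G,bridge=xen-br0 ... cluster1.example.com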

    
Hypervisor cluster defaults
+++++++++++++++++++++++++++

The generic format of the hypervisor cluster wide default setting
option is::

  --hypervisor-defaults $HYPERVISOR:$OPTION=$VALUE[,$OPTION=$VALUE]

:$HYPERVISOR: symbolic name of the hypervisor whose defaults you want
  to set, string
:$OPTION: cluster default option, string,
:$VALUE: cluster default option value, string.
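
By analogy with the instance-level examples above, setting a
hypervisor-wide default could then look like this (a sketch only: the
option name is borrowed from the instance example, and whether the
option is accepted by init, modify or both is left open here)::

  gnt-cluster modify --hypervisor-defaults xen-hvm:boot_order=cdrom:network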

    
Glossary
========

Since this document is only a delta from Ganeti 1.2, there are some
unexplained terms. Here is a non-exhaustive list.

.. _HVM:

HVM
  hardware virtualization mode, where the virtual machine is oblivious
  to the fact that it's being virtualized and all the hardware is
  emulated

.. _LU:

LogicalUnit
  the code associated with an OpCode, e.g. the code that implements
  the startup of an instance

.. _opcode:

OpCode
  a data structure encapsulating a basic cluster operation; for
  example, start instance, add instance, etc.

.. _PVM:

PVM
  para-virtualization mode, where the virtual machine knows it's being
  virtualized and as such there is no need for hardware emulation

.. _watcher:

watcher
  ``ganeti-watcher`` is a tool that should be run regularly from cron
  and takes care of restarting failed instances, restarting secondary
  DRBD devices, etc. For more details, see the man page
  ``ganeti-watcher(8)``.