========================
GlusterFS Ganeti support
========================

This document describes the plan for adding GlusterFS support inside Ganeti.

.. contents:: :depth: 4
.. highlight:: shell-example

Objective
=========

The aim is to let Ganeti support GlusterFS as one of its storage
backends. This includes three aspects:

- Add Gluster as a storage backend.
- Make sure Ganeti VMs can use GlusterFS backends in userspace mode (for
  newer QEMU/KVM versions which have this support) and otherwise, if
  possible, through some kernel-exported block device.
- Make sure Ganeti can configure GlusterFS by itself, by simply joining
  the storage space on new nodes to a GlusterFS node pool. Note that
  this may need another design document that explains how it interacts
  with storage pools, and that the node might or might not host VMs as
  well.

Background
==========

There are two possible ways to implement GlusterFS support in Ganeti.
One is to treat GlusterFS as an external storage backend; the other is
to implement GlusterFS inside Ganeti, that is, as a new disk type. The
benefit of the latter is that GlusterFS would not be opaque to Ganeti,
but fully supported and integrated, with no need for additional
testing/QA infrastructure. Having it internal also lets us provide a
monitoring agent for it and more visibility into what is going on. For
these reasons, GlusterFS support will be added directly inside Ganeti.

Implementation Plan
===================

Ganeti Side
-----------

To implement an internal storage backend for Ganeti, one should derive
from the BlockDev class in `ganeti/lib/storage/base.py`, which defines
the interface a backend must provide (create, remove and so on). These
methods should be implemented in `ganeti/lib/storage/bdev.py`. The
difference between implementing a backend inside Ganeti and outside of
it (as an external provider) lies in how these BlockDev methods are
realized and how they tie into Ganeti itself: the internal
implementation is not based on external scripts and integrates with
Ganeti more tightly. The RBD patches may be a good reference here. The
steps for adding a storage backend are as follows (a minimal sketch of
the first step is given after the list):

- Implement the BlockDev interface in bdev.py.
- Add the logic in cmdlib (e.g. migration, verify).
- Add the new storage type name to constants.
- Modify objects.Disk to support the GlusterFS storage type.
- The implementation will be performed similarly to the RBD one (see
  commit 7181fba).
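
As an illustration of the first step, here is a minimal, self-contained
sketch of such a BlockDev subclass. It is not the proposed
implementation: the class name, the backing-file layout and the
stand-in base class are assumptions made only so the example runs on
its own; the real interface lives in `ganeti/lib/storage/base.py`.

.. code-block:: python

  import os

  class BlockDev(object):
    """Stand-in for the BlockDev interface in ganeti/lib/storage/base.py."""

    def Attach(self):
      raise NotImplementedError

    def Remove(self):
      raise NotImplementedError

  class GlusterStorage(BlockDev):
    """Hypothetical disk backed by a file on a mounted Gluster volume."""

    def __init__(self, path):
      # Path of the backing file below the Gluster mount point.
      self.dev_path = path

    def Attach(self):
      # The device is usable once its backing file is visible.
      return os.path.exists(self.dev_path)

    def Remove(self):
      # Deleting the backing file frees the space on the volume.
      if os.path.exists(self.dev_path):
        os.unlink(self.dev_path)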

GlusterFS side
--------------

GlusterFS is a distributed file system implemented in user space.
Apart from NFS and CIFS, the GlusterFS namespace is accessed through
the FUSE-based Gluster native client. This access path is less
efficient, because data has to cross from kernel space to user space
and back. Two specific enhancements address this:

- A new library called libgfapi is now available as part of GlusterFS
  that provides POSIX-like C APIs for accessing Gluster volumes.
  libgfapi support will be available from the GlusterFS 3.4 release.
- QEMU/KVM (starting from QEMU 1.3) has a GlusterFS block driver that
  uses libgfapi, so there is no FUSE overhead any longer when QEMU/KVM
  works with VM images on Gluster volumes (see the sketch below).
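
For illustration, QEMU's GlusterFS driver addresses such images with
URIs of the general form ``gluster://server[:port]/volname/image``.
The helper below is only a sketch; the function and the server, volume
and image names are invented for the example:

.. code-block:: python

  def GlusterImageUri(server, volume, image):
    """Build a gluster:// image URI as accepted by QEMU >= 1.3."""
    return "gluster://%s/%s/%s" % (server, volume, image)

  # Hypothetical example values:
  print(GlusterImageUri("node1.example.com", "gv0", "disk0"))
  # -> gluster://node1.example.com/gv0/disk0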

Proposed implementation
-----------------------

QEMU/KVM includes native support for GlusterFS, so Ganeti could support
GlusterFS through QEMU/KVM alone. However, this would only let QEMU/KVM
VMs use GlusterFS backend storage, not the VMs of other hypervisors
such as Xen. To let GlusterFS inside Ganeti serve not only QEMU/KVM VMs
but also Xen and others, two parts need to be implemented. One part is
GlusterFS for Xen VMs, which works similarly to the sharedfile disk
template. The other part is GlusterFS for QEMU/KVM VMs, which relies on
the GlusterFS driver in QEMU/KVM. After a ``gnt-instance add -t gluster
instance.example.com`` command is executed, the added instance should
be checked: if the instance is a Xen VM, it will use the GlusterFS
sharedfile way; if it is a QEMU/KVM VM, it will use the QEMU/KVM +
GlusterFS way, as sketched below. For the first part (GlusterFS for Xen
VMs), the sharedfile disk template is a good reference; for the second
part (GlusterFS for QEMU/KVM VMs), the RBD disk template is a good
reference. The first part will be implemented first, and the second
part, which builds on it, will follow.

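
The dispatch described above could look like the following sketch. It
is purely illustrative: the function and its parameters are invented
here, and in the real implementation the decision would live in the
disk and hypervisor logic rather than in a free-standing helper:

.. code-block:: python

  def GlusterDiskAccess(hypervisor, server, volume, image, mount_point):
    """Choose the access path for a Gluster-backed disk (sketch only)."""
    if hypervisor == "kvm":
      # QEMU/KVM (>= 1.3) talks to Gluster directly via libgfapi,
      # reusing the gluster:// URI form shown earlier.
      return "gluster://%s/%s/%s" % (server, volume, image)
    # Xen and other hypervisors go through the mounted volume,
    # the same way as the sharedfile disk template.
    return "%s/%s" % (mount_point, image)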

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: