Ganeti administrator's guide
============================

Documents Ganeti version |version|

Introduction
------------

Ganeti is a virtualization cluster management software. You are
expected to be a system administrator familiar with your Linux
distribution and the Xen or KVM virtualization environments before
using it.

The various components of Ganeti all have man pages and interactive
help. This manual, though, will help you get familiar with the system
by explaining the most common operations, grouped by related use.

After a terminology glossary and a section on the prerequisites needed
to use this manual, the rest of this document is divided into three
main sections, which group different features of Ganeti:

- Instance Management
- High Availability Features
- Debugging Features

Ganeti terminology
------------------

This section provides a small introduction to Ganeti terminology,
which might be useful when reading the rest of the document.

Cluster
  A set of machines (nodes) that cooperate to offer a coherent,
  highly available virtualization service.

Node
  A physical machine which is a member of a cluster. Nodes are the
  basic cluster infrastructure, and are not fault tolerant.

Master node
  The node which controls the Cluster, from which all Ganeti commands
  must be given.

Instance
  A virtual machine which runs on a cluster. It can be a fault
  tolerant, highly available entity.

Pool
  A pool is a set of clusters sharing the same network.

Meta-Cluster
  Anything that concerns more than one cluster.

Prerequisites
-------------

You need to have your Ganeti cluster installed and configured before
you try any of the commands in this document. Please follow the
*Ganeti installation tutorial* for instructions on how to do that.

Managing Instances
------------------

Adding/Removing an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Adding a new virtual instance to your Ganeti cluster is really easy.
The command is::

  gnt-instance add \
    -n TARGET_NODE:SECONDARY_NODE -o OS_TYPE -t DISK_TEMPLATE \
    INSTANCE_NAME

The instance name must be resolvable (e.g. exist in DNS) and usually
points to an address in the same subnet as the cluster itself. Options
you can give to this command include:

- The disk size (``-s``) for a single-disk instance, or multiple
  ``--disk N:size=SIZE`` options for multi-disk instances

- The memory size (``-B memory``)

- The number of virtual CPUs (``-B vcpus``)

- Arguments for the NICs of the instance; by default, a single-NIC
  instance is created. The IP and/or bridge of the NIC can be changed
  via ``--nic 0:ip=IP,bridge=BRIDGE``
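
As a concrete illustration, here is a hypothetical invocation combining
these options. All names are placeholders: it assumes a node called
node1.example.com and an installed OS definition called debootstrap, and
the two ``-B`` values are combined into one comma-separated option::

  gnt-instance add -n node1.example.com -o debootstrap -t plain \
    -s 10g -B memory=512,vcpus=2 \
    --nic 0:ip=192.0.2.10,bridge=br0 \
    instance1.example.com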

There are four types of disk template you can choose from:

diskless
  The instance has no disks. Only used for special purpose operating
  systems or for testing.

file
  The instance will use plain files as backend for its disks. No
  redundancy is provided, and this is somewhat more difficult to
  configure for high performance.

plain
  The instance will use LVM devices as backend for its disks. No
  redundancy is provided.

drbd
  .. note:: This is only valid for multi-node clusters using DRBD 8.0.x

  A mirror is set between the local node and a remote one, which must
  be specified with the second value of the ``--node`` option. Use this
  option to obtain a highly available instance that can be failed over
  to a remote node should the primary one fail.

For example if you want to create a highly available instance, use the
drbd disk template::

  gnt-instance add -n TARGET_NODE:SECONDARY_NODE -o OS_TYPE -t drbd \
    INSTANCE_NAME

To know which operating systems your cluster supports you can use the
command::

  gnt-os list

Removing an instance is even easier than creating one. This operation
is irreversible and destroys all the contents of your instance. Use
with care::

  gnt-instance remove INSTANCE_NAME

Starting/Stopping an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Instances are automatically started at instance creation time. To
manually start one which is currently stopped you can run::

  gnt-instance startup INSTANCE_NAME

While the command to stop one is::

  gnt-instance shutdown INSTANCE_NAME

The command to see all the instances configured and their status is::

  gnt-instance list
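
For example, a hypothetical stop/check/start cycle on an instance named
instance1.example.com (a placeholder name) would be::

  gnt-instance shutdown instance1.example.com
  gnt-instance list
  gnt-instance startup instance1.example.com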

Do not use the Xen commands to stop instances. If you run, for example,
``xm shutdown`` or ``xm destroy`` on an instance, Ganeti will
automatically restart it (via the ``ganeti-watcher``).

Exporting/Importing an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can create a snapshot of an instance disk and Ganeti
configuration, which you can then back up, or import into another
cluster. The way to export an instance is::

  gnt-backup export -n TARGET_NODE INSTANCE_NAME

The target node can be any node in the cluster with enough space under
``/srv/ganeti`` to hold the instance image. Use the ``--noshutdown``
option to snapshot an instance without rebooting it. Any previous
snapshots of the same instance existing cluster-wide under
``/srv/ganeti`` will be removed by this operation: if you want to keep
them, move them out of the Ganeti exports directory.

Importing an instance is similar to creating a new one. The command is::

  gnt-backup import -n TARGET_NODE -t DISK_TEMPLATE \
    --src-node=NODE --src-dir=DIR INSTANCE_NAME

Most of the options available for the command :command:`gnt-instance
add` are supported here too.
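
For instance, a hypothetical export/import cycle between two nodes might
look like the following. All node and instance names are placeholders,
and the source directory assumes exports land under ``/srv/ganeti`` as
described above (verify the actual path on your source node)::

  gnt-backup export -n node1.example.com instance1.example.com
  gnt-backup import -n node2.example.com -t plain \
    --src-node=node1.example.com \
    --src-dir=/srv/ganeti/export/instance1.example.com \
    instance1.example.com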

High availability features
--------------------------

.. note:: This section only applies to multi-node clusters

Failing over an instance
~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode you can at any time
fail it over to its secondary node, even if the primary has somehow
failed and is not up anymore. Doing it is really easy: on the master
node you can just run::

  gnt-instance failover INSTANCE_NAME

That's it. After the command completes the secondary node is now the
primary, and vice versa.

Live migrating an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode, is currently running
and both its nodes are running fine, you can migrate it over to its
secondary node, without downtime. On the master node you need to run::

  gnt-instance migrate INSTANCE_NAME

Replacing an instance's disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So what if instead the secondary node for an instance has failed, or
you plan to remove a node from your cluster, and you have failed over
all its instances, but it is still secondary for some? The solution
here is to replace the instance disks, changing the secondary node::

  gnt-instance replace-disks -n NODE INSTANCE_NAME

This process is a bit long, but involves no instance downtime, and at
the end of it the instance has changed its secondary node, to which it
can, if necessary, be failed over.

Failing over the master node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is all good as long as the Ganeti master node is up. Should it go
down, or should you wish to decommission it, just run on any other
node the command::

  gnt-cluster masterfailover

and the node you ran it on is now the new master.

Adding/Removing nodes
~~~~~~~~~~~~~~~~~~~~~

And of course, now that you know how to move instances around, it's
easy to free up a node, and then you can remove it from the cluster::

  gnt-node remove NODE_NAME

and maybe add a new one::

  gnt-node add --secondary-ip=ADDRESS NODE_NAME
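
A hypothetical drain-and-replace sequence would then be (the node names
and the secondary IP are placeholders; the ``--secondary-ip`` option is
only relevant if your cluster uses a separate replication network)::

  gnt-node remove node3.example.com
  gnt-node add --secondary-ip=192.0.2.13 node4.example.com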

Debugging Features
------------------

At some point you might need to do some debugging operations on your
cluster or on your instances. This section will help you with the most
commonly used debugging functionality.

Accessing an instance's disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From an instance's primary node you have access to its disks. Never
ever mount the underlying logical volume manually on a fault tolerant
instance, or you risk breaking replication. The correct way to access
them is to run the command::

  gnt-instance activate-disks INSTANCE_NAME

And then access the device that gets created. After you've finished
you can deactivate them with the deactivate-disks command, which works
in the same way.
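
A minimal hypothetical session (the instance name is a placeholder, and
the exact device paths printed depend on your cluster) would be::

  gnt-instance activate-disks instance1.example.com
  # ... inspect the block device(s) listed by the command above ...
  gnt-instance deactivate-disks instance1.example.com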

Accessing an instance's console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The command to access a running instance's console is::

  gnt-instance console INSTANCE_NAME

Use the console normally and then type ``^]`` when done, to exit.

Instance OS definitions debugging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Should you have any problems with operating system support, the command
to run to see a complete status of all your nodes is::

  gnt-os diagnose

Cluster-wide debugging
~~~~~~~~~~~~~~~~~~~~~~

The :command:`gnt-cluster` command offers several options to run tests
or execute cluster-wide operations. For example::

  gnt-cluster verify-disks
  gnt-cluster getmaster
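
Another check from the same command family is overall cluster
verification (this assumes the standard ``gnt-cluster verify``
subcommand, which reports on the health of nodes, instances and their
configuration)::

  gnt-cluster verify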

See the man page :manpage:`gnt-cluster` to learn more about their usage.