<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
<article class="specification">
<title>Ganeti administrator's guide</title>
<para>Documents Ganeti version 1.2</para>
<title>Introduction</title>
<para>Ganeti is virtualization cluster management software. You are
expected to be a system administrator familiar with your Linux distribution
and the Xen virtualization environment before using it.
<para>The various components of Ganeti all have man pages and interactive
help. This manual, though, will help you get familiar with the system by
explaining the most common operations, grouped by related use.
<para>After a terminology glossary and a section on the prerequisites
needed to use this manual, the rest of this document is divided into three
main sections, which group different features of Ganeti:
<simpara>Instance Management</simpara>
<simpara>High Availability Features</simpara>
<simpara>Debugging Features</simpara>
<title>Ganeti Terminology</title>
<para>This section provides a small introduction to Ganeti terminology,
which may be useful when reading the rest of the document.
<listitem><para>A set of machines (nodes) that cooperate to offer a
coherent, highly available virtualization service.</para></listitem>
<listitem><para>A physical machine which is a member of a cluster.
Nodes are the basic cluster infrastructure, and they are not fault
tolerant.</para></listitem>
<term>Master Node</term>
<listitem><para>The node which controls the Cluster, from which all
Ganeti commands must be given.</para></listitem>
<listitem><para>A virtual machine which runs on a cluster. It can be
a fault-tolerant, highly available entity.</para></listitem>
<listitem><para>A pool is a set of clusters sharing the same
network.</para></listitem>
<term>Meta-Cluster</term>
<listitem><para>Anything that concerns more than one
cluster.</para></listitem>
<title>Prerequisites</title>
<para>You need to have your Ganeti cluster installed and configured
before you try any of the commands in this document. Please follow the
Ganeti installation tutorial for instructions on how to do that.
<title>Managing Instances</title>
<title>Adding/Removing an instance</title>
<para>Adding a new virtual instance to your Ganeti cluster is really
easy. The command is:
gnt-instance add -n TARGET_NODE -o OS_TYPE -t DISK_TEMPLATE INSTANCE_NAME
The instance name must exist in DNS and of course map to an address in
the same subnet as the cluster itself. Options you can give to this
command include:
<simpara>The disk size (-s)</simpara>
<simpara>The swap size (--swap-size)</simpara>
<simpara>The memory size (-m)</simpara>
<simpara>The number of virtual CPUs (-p)</simpara>
<simpara>The instance IP address (-i) (use -i auto to make Ganeti
record the address from DNS)</simpara>
<simpara>The bridge to connect the instance to (-b), if you don't
want to use the default one</simpara>
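<para>Putting these options together, a full creation command might look
like the following (the node, OS and instance names, as well as the sizes,
are invented for this example):</para>
<screen>
gnt-instance add -n node1.example.com -o debian-etch -t plain \
  -s 10g --swap-size 1g -m 512 -i auto instance1.example.com
</screen>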
<para>There are four types of disk template you can choose from:
<term>diskless</term>
<listitem><para>The instance has no disks. Only used for special-purpose
operating systems or for testing.</para></listitem>
<listitem><para>The instance will use LVM devices as backend for its
disks. No redundancy is provided.</para></listitem>
<term>local_raid1</term>
<listitem><para>A local mirror is set up between LVM devices to back the
instance. This provides some redundancy for the instance's
data.</para></listitem>
<term>remote_raid1</term>
<listitem><para>A mirror is set up between the local node and a remote
one, which must be specified with the --secondary-node option. Use
this option to obtain a highly available instance that can be failed
over to a remote node should the primary one fail.
For example, if you want to create a highly available instance, use the
remote_raid1 disk template:
gnt-instance add -n TARGET_NODE -o OS_TYPE -t remote_raid1 \
--secondary-node=SECONDARY_NODE INSTANCE_NAME
To find out which operating systems your cluster supports you can use:
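<para>Assuming the standard Ganeti OS management tool, that command is
normally:</para>
<screen>
gnt-os list
</screen>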
Removing an instance is even easier than creating one. This operation is
irreversible and destroys all the contents of your instance. Use with
care:
gnt-instance remove INSTANCE_NAME
<title>Starting/Stopping an instance</title>
<para>Instances are automatically started at instance creation time. To
manually start one which is currently stopped, you can run:
gnt-instance startup INSTANCE_NAME
while the command to stop one is:
gnt-instance shutdown INSTANCE_NAME
The command to see all the instances configured and their status is:
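<para>Assuming the standard instance listing subcommand, this is
normally:</para>
<screen>
gnt-instance list
</screen>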
<para>Do not use the Xen commands to stop instances. If you run, for
example, xm shutdown or xm destroy on an instance, Ganeti will
automatically restart it (via the
<citerefentry><refentrytitle>ganeti-watcher</refentrytitle>
<manvolnum>8</manvolnum></citerefentry>)
<title>Exporting/Importing an instance</title>
<para>You can create a snapshot of an instance's disk and Ganeti
configuration, which you can then back up, or import into another cluster.
The way to export an instance is:
gnt-backup export -n TARGET_NODE INSTANCE_NAME
The target node can be any node in the cluster with enough space under
/srv/ganeti to hold the instance image. Use the --noshutdown option to
snapshot an instance without rebooting it. Any previous snapshot of the
same instance existing cluster-wide under /srv/ganeti will be removed by
this operation: if you want to keep them, move them out of the Ganeti
<para>Importing an instance is as easy as creating a new one. The command
is:
gnt-backup import -n TRGT_NODE -t DISK_TMPL --src-node=NODE --src-dir=DIR INST_NAME
Most of the options available for gnt-instance add are supported here
too.
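<para>As a hypothetical example (all node, directory and instance names
below are illustrative), importing an instance previously exported to
node1 might look like:</para>
<screen>
gnt-backup import -n node2.example.com -t plain \
  --src-node=node1.example.com \
  --src-dir=/srv/ganeti/export/instance1.example.com \
  instance1.example.com
</screen>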
<title>High availability features</title>
<title>Failing over an instance</title>
<para>If an instance is built in highly available mode, you can at any
time fail it over to its secondary node, even if the primary has somehow
failed and is no longer up. Doing so is really easy; on the master
node you can just run:
gnt-instance failover INSTANCE_NAME
That's it. After the command completes, the secondary node is now the
primary, and vice versa.
<title>Replacing an instance's disks</title>
<para>So what if, instead, the secondary node for an instance has failed,
or you plan to remove a node from your cluster, you have failed over all
its instances, but it is still secondary for some? The solution here is to
replace the instance disks, changing the secondary node:
gnt-instance replace-disks -n NEW_SECONDARY INSTANCE_NAME
This process is a bit longer, but involves no instance downtime, and at
the end of it the instance has changed its secondary node, to which it
can be failed over if necessary.
<title>Failing over the master node</title>
<para>This is all good as long as the Ganeti Master Node is up. Should it
go down, or should you wish to decommission it, just run on any other node:
gnt-cluster masterfailover
and the node you ran it on is now the new master.
<title>Adding/Removing nodes</title>
<para>And of course, now that you know how to move instances around, it's
easy to free up a node, and then you can remove it from the cluster:
gnt-node remove NODE_NAME
and maybe add a new one:
gnt-node add [--secondary-ip=ADDRESS] NODE_NAME
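<para>For example, to add a node that uses a separate replication network
(the node name and address below are illustrative):</para>
<screen>
gnt-node add --secondary-ip=192.0.2.10 node3.example.com
</screen>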
<title>Debugging Features</title>
<para>At some point you might need to do some debugging operations on your
cluster or on your instances. This section will help you with the most
common debugging functionality.
<title>Accessing an instance's disks</title>
<para>From an instance's primary node you have access to its disks. Never
ever mount the underlying logical volume manually on a fault-tolerant
instance, though, or you risk breaking replication. The correct way to
access them is to run the command:
gnt-instance activate-disks INSTANCE_NAME
and then access the device that gets created. Of course, after you've
finished you can deactivate them with the deactivate-disks command, which
works in the same way.
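<para>For example, once you have finished inspecting the disks of an
instance (the name below is illustrative), release them again with:</para>
<screen>
gnt-instance deactivate-disks instance1.example.com
</screen>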
<title>Accessing an instance's console</title>
<para>The command to access a running instance's console is:
gnt-instance console INSTANCE_NAME
Use the console normally and then type ^] when done, to exit.
<title>Instance Operating System Debugging</title>
<para>Should you have any problems with operating system support, the
command to run to see a complete status for all your nodes is:
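<para>Assuming the standard Ganeti OS tooling, this is normally the
diagnose subcommand:</para>
<screen>
gnt-os diagnose
</screen>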
<title>Cluster-wide debugging</title>
<para>The gnt-cluster command offers several options to run tests or
execute cluster-wide operations. For example:
gnt-cluster getmaster
See the respective help to learn more about their usage.
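<para>As a further illustration, a cluster-wide health check (assuming the
standard verify subcommand) is:</para>
<screen>
gnt-cluster verify
</screen>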