Revision e3c826ec doc/admin.sgml

    </para>

    <sect2>
      <title>Ganeti terminology</title>

      <para>
        This section provides a small introduction to Ganeti terminology, which
        might be useful for reading the rest of the document.

        <glosslist>
          <glossentry>
            <glossterm>Cluster</glossterm>
            <glossdef>
......
              </simpara>
            </glossdef>
          </glossentry>
        </glosslist>
      </para>
    </sect2>

......
      <title>Prerequisites</title>

      <para>
        You need to have your Ganeti cluster installed and configured before
        you try any of the commands in this document. Please follow the
        <emphasis>Ganeti installation tutorial</emphasis> for instructions on
        how to do that.
      </para>
    </sect2>

......
      <title>Adding/Removing an instance</title>

      <para>
        Adding a new virtual instance to your Ganeti cluster is really easy.
        The command is:

        <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable> -o <replaceable>OS_TYPE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The instance name must be resolvable (e.g. exist in DNS) and
        of course map to an address in the same subnet as the cluster
......
          <simpara>The number of virtual CPUs (<option>-p</option>)</simpara>
        </listitem>
        <listitem>
          <simpara>The instance IP address (<option>-i</option>) (use the value
            <literal>auto</literal> to make Ganeti record the address from
            DNS)</simpara>
        </listitem>
        <listitem>
          <simpara>The bridge to connect the instance to (<option>-b</option>),
            if you don't want to use the default one</simpara>
        </listitem>
      </itemizedlist>
      </para>
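
      <para>
        For example, a complete hypothetical invocation combining the options
        above might look like this (the node, OS and instance names are purely
        illustrative; use <emphasis>gnt-os list</emphasis>, described below,
        to see the OS types your cluster actually provides):

        <synopsis>gnt-instance add -n node1.example.com -o debian-etch -t plain -p 2 -i auto instance1.example.com</synopsis>
      </para>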
......
      <variablelist>
        <varlistentry>
          <term>diskless</term>
          <listitem>
            <para>The instance has no disks. Only used for special purpose
              operating systems or for testing.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>plain</term>
          <listitem>
            <para>The instance will use LVM devices as the backend for its
              disks. No redundancy is provided.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>local_raid1</term>
          <listitem>
            <para>A local mirror is set up between LVM devices to back the
              instance. This provides some redundancy for the instance's
              data.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>remote_raid1</term>
          <listitem>
            <simpara><emphasis role="strong">Note:</emphasis> This is only
              valid for multi-node clusters.</simpara>
            <simpara>
              A mirror is set up between the local node and a remote one, which
              must be specified with the second value of the
              <option>--node</option> option. Use this option to obtain a
              highly available instance that can be failed over to a remote
              node should the primary one fail.
            </simpara>
          </listitem>
        </varlistentry>

      </variablelist>

      <para>
        For example, if you want to create a highly available instance, use
        the remote_raid1 disk template:
        <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable><optional>:<replaceable>SECONDARY_NODE</replaceable></optional> -o <replaceable>OS_TYPE</replaceable> -t remote_raid1 \
  <replaceable>INSTANCE_NAME</replaceable></synopsis>
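
        A hypothetical invocation with concrete values (all names here are
        illustrative) could then be:

        <synopsis>gnt-instance add -n node1.example.com:node2.example.com -o debian-etch -t remote_raid1 \
  instance1.example.com</synopsis>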
      </para>

      <para>
        To know which operating systems your cluster supports, you can use:
        <synopsis>gnt-os list</synopsis>
      </para>

      <para>
        Removing an instance is even easier than creating one. This operation
        is irreversible and destroys all the contents of your instance. Use
        with care:

        <synopsis>gnt-instance remove <replaceable>INSTANCE_NAME</replaceable></synopsis>
      </para>
    </sect2>

......
      <title>Starting/Stopping an instance</title>

      <para>
        Instances are automatically started at instance creation time. To
        manually start one which is currently stopped, you can run:

        <synopsis>gnt-instance startup <replaceable>INSTANCE_NAME</replaceable></synopsis>

        While the command to stop one is:

        <synopsis>gnt-instance shutdown <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The command to see all the configured instances and their status is:

        <synopsis>gnt-instance list</synopsis>

      </para>

......
      <para>
        You can create a snapshot of an instance's disk and Ganeti
        configuration, which you can then back up or import into
        another cluster. The way to export an instance is:

        <synopsis>gnt-backup export -n <replaceable>TARGET_NODE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The target node can be any node in the cluster with enough
        space under <filename class="directory">/srv/ganeti</filename>
......
      </para>
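
      <para>
        For example, exporting a hypothetical instance to a node with enough
        free space (both names are illustrative):

        <synopsis>gnt-backup export -n node2.example.com instance1.example.com</synopsis>
      </para>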

      <para>
        Importing an instance is similar to creating a new one. The command is:

        <synopsis>gnt-backup import -n <replaceable>TARGET_NODE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> --src-node=<replaceable>NODE</replaceable> --src-dir=<replaceable>DIR</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
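
        For example, a hypothetical import (the names and the source directory
        are illustrative, assuming the export was left under
        <filename class="directory">/srv/ganeti</filename>) might look like:

        <synopsis>gnt-backup import -n node1.example.com -t plain --src-node=node2.example.com --src-dir=/srv/ganeti/export/instance1.example.com instance1.example.com</synopsis>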

        Most of the options available for the command
        <emphasis>gnt-instance add</emphasis> are supported here too.
......
        primary has somehow failed and it's not up anymore. Doing it
        is really easy; on the master node you can just run:

        <synopsis>gnt-instance failover <replaceable>INSTANCE_NAME</replaceable></synopsis>

        That's it. After the command completes, the secondary node is
        now the primary, and vice versa.
......
        for some? The solution here is to replace the instance disks,
        changing the secondary node:

        <synopsis>gnt-instance replace-disks -n <replaceable>NEW_SECONDARY</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
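
        For example, to move a hypothetical instance's mirror to a new
        secondary node (both names are illustrative):

        <synopsis>gnt-instance replace-disks -n node3.example.com instance1.example.com</synopsis>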

        This process takes a bit longer, but involves no instance
        downtime, and at the end of it the instance has changed its
......
        up. Should it go down, or should you wish to decommission it,
        just run the following command on any other node:

        <synopsis>gnt-cluster masterfailover</synopsis>

        and the node you ran it on is now the new master.
      </para>
......
        it's easy to free up a node, and then you can remove it from
        the cluster:

        <synopsis>gnt-node remove <replaceable>NODE_NAME</replaceable></synopsis>

        and maybe add a new one:

        <synopsis>gnt-node add <optional><option>--secondary-ip=<replaceable>ADDRESS</replaceable></option></optional> <replaceable>NODE_NAME</replaceable></synopsis>
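
        For example, if your cluster uses a separate replication network, a
        hypothetical addition (the name and address are illustrative) could be:

        <synopsis>gnt-node add --secondary-ip=192.0.2.10 node4.example.com</synopsis>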
      </para>
......
        replication. The correct way to access them is to run the
        command:

        <synopsis>gnt-instance activate-disks <replaceable>INSTANCE_NAME</replaceable></synopsis>

        You can then access the device that gets created. After you've
        finished, you can deactivate them with the deactivate-disks
......
      <para>
        The command to access a running instance's console is:

        <synopsis>gnt-instance console <replaceable>INSTANCE_NAME</replaceable></synopsis>

        Use the console normally and then type
        <userinput>^]</userinput> when done, to exit.
......
        the command to run to see a complete status for all your nodes
        is:

        <synopsis>gnt-os diagnose</synopsis>

      </para>
