Revision 6c4811dc

b/INSTALL
 Before installing, please verify that you have the following programs:
   - Xen virtualization (version 3.0.x or 3.1)
     http://xen.xensource.com/
-  - DRBD (kernel module and userspace utils)
+  - DRBD (kernel module and userspace utils), version 0.7.x or 8.0.x
     http://www.drbd.org/
   - LVM2
     http://sourceware.org/lvm2/
......
     http://developer.osdl.org/dev/iproute2
   - arping (part of iputils package)
     ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz
-  - mdadm (Linux Software Raid tools)
+  - mdadm (Linux Software Raid tools) (needed only with drbd 0.7.x)
     http://www.kernel.org/pub/linux/utils/raid/mdadm/
   - Python 2.4
     http://www.python.org
b/NEWS
-News for Ganeti
-===============
-
-This file lists major changes between versions.
-
-1.2b2:
-  * Change configuration file format from Python's Pickle to JSON.
+Version 1.2b2
+  - Change configuration file format from Python's Pickle to JSON.
     Upgrading is possible using the cfgupgrade utility.
+  - Add support for DRBD 8.0 (new disk template `drbd`) which allows for
+    faster replace disks and is more stable (DRBD 8 has many
+    improvements compared to DRBD 0.7)
+  - Added command line tags support (see man pages for gnt-instance,
+    gnt-node, gnt-cluster)
+  - Added instance rename support
+  - Added multi-instance startup/shutdown
+  - Added cluster rename support
+  - Added `gnt-node evacuate` to simplify some node operations
+  - Added instance reboot operation that can speed up reboot as compared
+    to stop and start
+  - Soften the requirement that hostnames are in FQDN format
+  - The ganeti-watcher now activates drbd pairs after secondary node
+    reboots
+  - Removed dependency on Debian's patched fping that uses the
+    non-standard -S option
+  - Now the OS definitions are searched for in multiple, configurable
+    paths (easier for distros to package)
+  - Some changes to the hooks infrastructure (especially the new
+    post-configuration update hook)
+  - Other small bugfixes
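
The tags entry above refers to free-form strings that can be attached to instances, nodes or the cluster. As a minimal illustration (the instance name is a placeholder, and the add-tags/list-tags subcommand names are an assumption rather than text quoted from this revision):

  # attach two free-form tags to an instance, then list them back
  gnt-instance add-tags instance1.example.com critical web
  gnt-instance list-tags instance1.example.com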
b/doc/admin.sgml
           <term>remote_raid1</term>
           <listitem>
             <simpara><emphasis role="strong">Note:</emphasis> This is only
-              valid for multi-node clusters.</simpara>
+              valid for multi-node clusters using drbd 0.7.</simpara>
             <simpara>
               A mirror is set between the local node and a remote one, which
               must be specified with the second value of the --node option. Use
......
           </listitem>
         </varlistentry>

+        <varlistentry>
+          <term>drbd</term>
+          <listitem>
+            <simpara><emphasis role="strong">Note:</emphasis> This is only
+              valid for multi-node clusters using drbd 8.0.</simpara>
+            <simpara>
+              This is similar to the
+              <replaceable>remote_raid1</replaceable> option, but uses
+              new features in drbd 8 to simplify the device
+              stack. From a user's point of view, this will improve
+              the speed of the <command>replace-disks</command>
+              command and (in future versions) provide more
+              functionality.
+            </simpara>
+          </listitem>
+        </varlistentry>
+
       </variablelist>

       <para>
         For example if you want to create an highly available instance use the
-        remote_raid1 disk template:
+        remote_raid1 or drbd disk templates:
         <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable><optional>:<replaceable>SECONDARY_NODE</replaceable></optional> -o <replaceable>OS_TYPE</replaceable> -t remote_raid1 \
   <replaceable>INSTANCE_NAME</replaceable></synopsis>
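
As a concrete illustration of the new drbd template described above (all names below are placeholders, not values taken from this revision), a DRBD 8 backed instance would be created along these lines:

  # primary node before the colon, DRBD secondary after it; hostnames, OS and
  # instance name are placeholders
  gnt-instance add -n node1.example.com:node2.example.com -o debian-etch \
    -t drbd instance1.example.com

The invocation is identical to the remote_raid1 case apart from the -t argument.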
......
         failed, or you plan to remove a node from your cluster, and
         you failed over all its instances, but it's still secondary
         for some? The solution here is to replace the instance disks,
-        changing the secondary node:
+        changing the secondary node. This is done in two ways, depending on the disk template type. For <literal>remote_raid1</literal>:
+
+        <synopsis>gnt-instance replace-disks <option>-n <replaceable>NEW_SECONDARY</replaceable></option> <replaceable>INSTANCE_NAME</replaceable></synopsis>

-        <synopsis>gnt-instance replace-disks -n <replaceable>NEW_SECONDARY</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+        and for <literal>drbd</literal>:
+        <synopsis>gnt-instance replace-disks <option>-s</option> <option>-n <replaceable>NEW_SECONDARY</replaceable></option> <replaceable>INSTANCE_NAME</replaceable></synopsis>

         This process is a bit longer, but involves no instance
         downtime, and at the end of it the instance has changed its
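
A worked example of the drbd form above, with placeholder names throughout (node3.example.com being the new secondary):

  # move the DRBD secondary of instance1 to node3; all names are placeholders
  gnt-instance replace-disks -s -n node3.example.com instance1.example.com

As the surrounding text notes, the instance keeps running while its disks are rebuilt against the new secondary node.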
......
     </sect2>

     <sect2>
-      <title>Instance Operating System Debugging</title>
+      <title>Instance OS definitions Debugging</title>

       <para>
         Should you have any problems with operating systems support
b/doc/install.sgml
       </para>

       <para>
-        Supported DRBD version: the <literal>0.7</literal>
-        series. It's recommended to have at least version
-        <literal>0.7.24</literal> if you use <command>udev</command>
-        since older versions have a bug related to device discovery
-        which can be triggered in cases of hard drive failure.
+        Supported DRBD versions: the <literal>0.7</literal> series
+        <emphasis role="strong">or</emphasis>
+        <literal>8.0.x</literal>. It's recommended to have at least
+        version <literal>0.7.24</literal> if you use
+        <command>udev</command> since older versions have a bug
+        related to device discovery which can be triggered in cases of
+        hard drive failure.
       </para>

       <para>
......
         you have the DRBD utils installed and the module in your
         kernel you're fine. Please check that your system is
         configured to load the module at every boot, and that it
-        passes the following option to the module:
+        passes the following option to the module (for
+        <literal>0.7.x</literal>):
         <computeroutput>minor_count=64</computeroutput> (this will
-        allow you to use up to 32 instances per node).
+        allow you to use up to 32 instances per node) or for
+        <literal>8.0.x</literal> you can use up to
+        <constant>255</constant>
+        (i.e. <computeroutput>minor_count=255</computeroutput>, but
+        for most clusters <constant>128</constant> should be enough).
       </para>

       <formalpara><title>Debian</title>
......
 echo drbd minor_count=64 >> /etc/modules
 modprobe drbd minor_count=64
       </screen>
+      <para>or for using DRBD <literal>8.x</literal> from the etch
+      backports:</para>
+      <screen>
+apt-get install -t etch-backports drbd8-module-source drbd8-utils
+m-a update
+m-a a-i drbd8
+echo drbd minor_count=128 >> /etc/modules
+modprobe drbd minor_count=128
+      </screen>

       <para>
         It is also recommended that you comment out the default
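
Whichever variant is used, a quick sanity check (illustrative only, not part of this revision) is to confirm that the module is loaded and which DRBD version the kernel reports:

  # verify the drbd module is present and check the running version
  lsmod | grep drbd
  cat /proc/drbd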
......
         This step shows how to setup a virtual instance with either
         non-mirrored disks (<computeroutput>plain</computeroutput>) or
         with network mirrored disks
-        (<computeroutput>remote_raid1</computeroutput>).  All commands
-        need to be executed on the Ganeti master node (the one on
-        which <computeroutput>gnt-cluster init</computeroutput> was
-        run).  Verify that the OS scripts are present on all cluster
-        nodes with <computeroutput>gnt-os list</computeroutput>.
+        (<computeroutput>remote_raid1</computeroutput> for drbd 0.7
+        and <computeroutput>drbd</computeroutput> for drbd 8.x).  All
+        commands need to be executed on the Ganeti master node (the
+        one on which <computeroutput>gnt-cluster init</computeroutput>
+        was run).  Verify that the OS scripts are present on all
+        cluster nodes with <computeroutput>gnt-os
+        list</computeroutput>.
       </para>
       <para>
         To create a virtual instance, you need a hostname which is
......
       <para>
         To create a network mirrored instance, change the argument to
         the <option>-t</option> option from <literal>plain</literal>
-        to <literal>remote_raid1</literal> and specify the node on
+        to <literal>remote_raid1</literal> (drbd 0.7) or
+        <literal>drbd</literal> (drbd 8.0) and specify the node on
         which the mirror should reside with the second value of the
         <option>--node</option> option, like this:
       </para>
......

       <para>
         To failover an instance to its secondary node (only possible
-        in <literal>remote_raid1</literal> setup), use
-        <computeroutput>gnt-instance failover
+        in <literal>remote_raid1</literal> or <literal>drbd</literal>
+        disk templates), use <computeroutput>gnt-instance failover
         <replaceable>INSTANCENAME</replaceable></computeroutput>.
       </para>
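
A minimal illustration with a placeholder instance name:

  # the instance is stopped on its primary node and started on its secondary
  gnt-instance failover instance1.example.com

As the paragraph above states, this only works with the mirrored (remote_raid1 or drbd) disk templates.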