Revision 1cdc9dbb
b/doc/admin.rst

@@ -591,7 +591,7 @@
 
 Since the process involves copying all data from the working node to the
 target node, it will take a while, depending on the instance's disk
-size, node I/O system and network speed. But it is (baring any network
+size, node I/O system and network speed. But it is (barring any network
 interruption) completely transparent for the instance.
 
 Re-creating disks for non-redundant instances
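The admin-guide passage above says the move time depends on disk size, node I/O and network speed. As a rough back-of-the-envelope sketch of why (the bottleneck model and the throughput figures are assumptions for illustration, not anything Ganeti itself computes):

```python
def estimate_copy_seconds(disk_gib, disk_mib_s, net_mib_s):
    """Rough lower bound on the time to copy an instance's disk data
    between nodes: the transfer can go no faster than the slower of
    the node's disk I/O and the network link (both in MiB/s)."""
    bottleneck = min(disk_mib_s, net_mib_s)
    return disk_gib * 1024 / bottleneck

# E.g. a 100 GiB disk over a ~110 MiB/s (gigabit) link, with disks
# that can sustain 200 MiB/s: the network is the bottleneck.
print(round(estimate_copy_seconds(100, 200, 110)))  # 931 (~15 minutes)
```

The point of the sketch is only that the operation is minutes-to-hours long for typical disk sizes, which is why the guide stresses that it is nevertheless transparent to the running instance.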
b/doc/walkthrough.rst

@@ -104,8 +104,8 @@
 debootstrap
 node1#
 
-Running a burnin
-----------------
+Running a burn-in
+-----------------
 
 Now that the cluster is created, it is time to check that the hardware
 works correctly, that the hypervisor can actually create instances,
@@ -263,8 +263,8 @@
 …
 node1#
 
-You can see in the above what operations the burnin does. Ideally, the
-burnin log would proceed successfully through all the steps and end
+You can see in the above what operations the burn-in does. Ideally, the
+burn-in log would proceed successfully through all the steps and end
 cleanly, without throwing errors.
 
 Instance operations
@@ -584,7 +584,7 @@
 Mon Oct 26 05:27:39 2009 - INFO: Readding a node, the offline/drained flags were reset
 Mon Oct 26 05:27:39 2009 - INFO: Node will be a master candidate
 
-And is now working again::
+And it is now working again::
 
 node1# gnt-node list
 Node DTotal DFree MTotal MNode MFree Pinst Sinst
@@ -592,7 +592,7 @@
 node2 1.3T 1.3T 32.0G 1.0G 30.4G 1 3
 node3 1.3T 1.3T 32.0G 1.0G 30.4G 0 0
 
-.. note:: If you have the Ganeti has been built with the htools
+.. note:: If Ganeti has been built with the htools
 component enabled, you can shuffle the instances around to have a
 better use of the nodes.
 
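The walkthrough section above shows the aligned table printed by ``gnt-node list``. For a quick script one could split that table as below; this is an illustrative parser only, and real scripts should prefer a machine-readable output mode (e.g. a field separator option) over scraping aligned columns:

```python
def parse_node_list(output):
    """Split the human-readable `gnt-node list` table into one dict
    per node, keyed by the column headers. Assumes no field contains
    whitespace, which holds for the columns shown in the walkthrough."""
    lines = output.strip().splitlines()
    headers = lines[0].split()
    return [dict(zip(headers, line.split())) for line in lines[1:]]

# Sample taken from the walkthrough output above:
sample = """\
Node  DTotal DFree MTotal MNode MFree Pinst Sinst
node2 1.3T   1.3T  32.0G  1.0G  30.4G 1     3
node3 1.3T   1.3T  32.0G  1.0G  30.4G 0     0
"""
nodes = parse_node_list(sample)
print(nodes[0]["Node"], nodes[0]["Pinst"])  # node2 1
```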
b/man/gnt-backup.rst

@@ -138,11 +138,11 @@
 
 maxmem
 the maximum memory size of the instance; as usual, suffixes can be
-used to denote the unit, otherwise the value is taken in mebibites
+used to denote the unit, otherwise the value is taken in mebibytes
 
 minmem
 the minimum memory size of the instance; as usual, suffixes can be
-used to denote the unit, otherwise the value is taken in mebibites
+used to denote the unit, otherwise the value is taken in mebibytes
 
 vcpus
 the number of VCPUs to assign to the instance (if this value makes
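The "suffixes can be used to denote the unit, otherwise mebibytes" rule being corrected above can be sketched as follows; this is an illustrative parser written for this note, not Ganeti's actual size-parsing code:

```python
def parse_mem_size(value):
    """Convert a memory size string to MiB. A bare number is taken as
    mebibytes; 'M', 'G' and 'T' suffixes scale it accordingly.
    Illustrative sketch only, not Ganeti's own parser."""
    value = value.strip().upper()
    factors = {"M": 1, "G": 1024, "T": 1024 * 1024}
    if value and value[-1] in factors:
        return float(value[:-1]) * factors[value[-1]]
    return float(value)  # no suffix: the value is already in mebibytes

print(parse_mem_size("512"))  # 512.0 (bare number, taken as MiB)
print(parse_mem_size("4G"))   # 4096.0
```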
b/man/gnt-instance.rst

@@ -61,7 +61,7 @@
 instance's disks. Ganeti will rename these volumes to the standard
 format, and (without installing the OS) will use them as-is for the
 instance. This allows migrating instances from non-managed mode
-(e.q. plain KVM with LVM) to being managed via Ganeti. Note that
+(e.g. plain KVM with LVM) to being managed via Ganeti. Please note that
 this works only for the \`plain' disk template (see below for
 template details).
 
@@ -130,11 +130,11 @@
 
 maxmem
 the maximum memory size of the instance; as usual, suffixes can be
-used to denote the unit, otherwise the value is taken in mebibites
+used to denote the unit, otherwise the value is taken in mebibytes
 
 minmem
 the minimum memory size of the instance; as usual, suffixes can be
-used to denote the unit, otherwise the value is taken in mebibites
+used to denote the unit, otherwise the value is taken in mebibytes
 
 vcpus
 the number of VCPUs to assign to the instance (if this value makes
@@ -180,7 +180,7 @@
 n
 network boot (PXE)
 
-The default is not to set an HVM boot order which is interpreted
+The default is not to set an HVM boot order, which is interpreted
 as 'dc'.
 
 For KVM the boot order is either "floppy", "cdrom", "disk" or
@@ -1444,27 +1444,31 @@
 The option ``-f`` will skip the prompting for confirmation.
 
 If ``--allow-failover`` is specified it tries to fallback to failover if
-it already can determine that a migration wont work (i.e. if the
-instance is shutdown). Please note that the fallback will not happen
+it already can determine that a migration won't work (i.e. if the
+instance is shut down). Please note that the fallback will not happen
 during execution. If a migration fails during execution it still fails.
 
 Example (and expected output)::
 
 # gnt-instance migrate instance1
-Migrate will happen to the instance instance1. Note that migration is
-**experimental** in this version. This might impact the instance if
-anything goes wrong. Continue?
+Instance instance1 will be migrated. Note that migration
+might impact the instance if anything goes wrong (e.g. due to bugs in
+the hypervisor). Continue?
 y/[n]/?: y
+Migrating instance instance1.example.com
 * checking disk consistency between source and target
-* ensuring the target is in secondary mode
+* switching node node2.example.com to secondary mode
+* changing into standalone mode
 * changing disks into dual-master mode
-- INFO: Waiting for instance instance1 to sync disks.
-- INFO: Instance instance1's disks are in sync.
+* wait until resync is done
+* preparing node2.example.com to accept the instance
 * migrating instance to node2.example.com
-* changing the instance's disks on source node to secondary
-- INFO: Waiting for instance instance1 to sync disks.
-- INFO: Instance instance1's disks are in sync.
+* switching node node1.example.com to secondary mode
+* wait until resync is done
+* changing into standalone mode
+* changing disks into single-master mode
+* wait until resync is done
+* done
 #
 
 
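The ``--allow-failover`` semantics spelled out in the gnt-instance text above (fall back to failover only when the problem is detectable *before* execution, never for a failure during the migration itself) can be sketched like this; the function and the status check are hypothetical illustrations, not Ganeti code:

```python
def move_instance(instance, allow_failover, migrate, failover):
    """Hypothetical sketch of --allow-failover: fall back to failover
    only for problems detected before the migration starts (e.g. the
    instance is shut down, so live migration cannot work); a failure
    raised during the migration itself is simply a failure."""
    if instance["status"] == "shutdown":
        if allow_failover:
            return failover(instance)
        raise RuntimeError("instance is shut down, migration won't work")
    # Past this point there is no fallback: if migrate() raises,
    # the whole operation fails.
    return migrate(instance)

# Usage with stub callbacks standing in for the real operations:
result = move_instance({"status": "shutdown"}, True,
                       migrate=lambda i: "migrated",
                       failover=lambda i: "failed over")
print(result)  # failed over
```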