Revision 926feaf1 doc/walkthrough.rst

Our simulated example cluster will have three machines, named
``node1``, ``node2``, ``node3``. Note that in real life machines will
usually have FQDNs but here we use short names for brevity. We will use
a secondary network for replication data, ``192.0.2.0/24``, with nodes
having the last octet the same as their index. The cluster name will be
``example-cluster``. All nodes have the same simulated hardware
configuration, two disks of 750GB, 32GB of memory and 4 CPUs.
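
The replication (secondary) addresses implied by the convention above
work out as follows; this listing is purely illustrative, derived from
the ``192.0.2.0/24`` network and the "last octet equals node index"
rule, and is not taken from any real configuration file::

  # secondary (replication) network, 192.0.2.0/24
  node1  192.0.2.1
  node2  192.0.2.2
  node3  192.0.2.3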
......
Follow the :doc:`install` document and prepare the nodes. Then it's time
to initialise the cluster::

  node1# gnt-cluster init -s 192.0.2.1 --enabled-hypervisors=xen-pvm example-cluster
  node1#

The creation was fine. Let's check that the one node we have is functioning
......

Since this proceeded correctly, let's add the other two nodes::

  node1# gnt-node add -s 192.0.2.2 node2
  -- WARNING --
  Performing this operation is going to replace the ssh daemon keypair
  on the target machine (node2) with the ones of the current one
  and grant full intra-cluster ssh root access to/from it

  The authenticity of host 'node2 (192.0.2.2)' can't be established.
  RSA key fingerprint is 9f:…
  Are you sure you want to continue connecting (yes/no)? yes
  root@node2's password:
  Mon Oct 26 02:11:54 2009  - INFO: Node will be a master candidate
  node1# gnt-node add -s 192.0.2.3 node3
  -- WARNING --
  Performing this operation is going to replace the ssh daemon keypair
  on the target machine (node3) with the ones of the current one
  and grant full intra-cluster ssh root access to/from it

  The authenticity of host 'node3 (192.0.2.3)' can't be established.
  RSA key fingerprint is 9f:…
  Are you sure you want to continue connecting (yes/no)? yes
  root@node3's password:
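
For more than a couple of nodes, typing one ``gnt-node add`` per machine
gets tedious. Because the naming convention maps the node index to the
last octet of the replication address, the same commands can be issued
from a small shell loop; this is only a sketch of the idea, and it still
assumes you answer the SSH host-key and root password prompts for each
node as shown above::

  node1# for i in 2 3; do gnt-node add -s 192.0.2.$i node$i; done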
......
  node1# gnt-node info node3
  Node name: node3
    primary ip: 198.51.100.1
    secondary ip: 192.0.2.3
    master candidate: True
    drained: False
    offline: False
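
The same data can be pulled in a script-friendly form with ``gnt-node
list`` and an explicit field list. The command below is only a sketch:
the field names used here (``pip``/``sip`` for the primary and secondary
IPs, plus ``master_candidate``, ``drained`` and ``offline``) may differ
between Ganeti versions, so check the ``gnt-node list`` documentation
for your release::

  node1# gnt-node list --no-headers -o name,pip,sip,master_candidate,drained,offline node3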
......
reused. Re-adding it is simple::

  node1# gnt-node add --readd node3
  The authenticity of host 'node3 (198.51.100.1)' can't be established.
  RSA key fingerprint is 9f:2e:5a:2e:e0:bd:00:09:e4:5c:32:f2:27:57:7a:f4.
  Are you sure you want to continue connecting (yes/no)? yes
  Mon Oct 26 05:27:39 2009  - INFO: Readding a node, the offline/drained flags were reset
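
After re-adding a node it is worth double-checking overall cluster
health; ``gnt-cluster verify`` runs a full consistency check. Its output
is not reproduced here since it depends on the state of your cluster::

  node1# gnt-cluster verify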
