Revision 4b1eede6

b/docs/index.rst
=======================================

snf-network is a set of scripts that handle the network configuration of
an instance inside a Ganeti cluster. It takes advantage of the
variables that Ganeti exports to the scripts' execution environment and issues
all the necessary commands to ensure network connectivity to the instance
based on the requested setup.
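
As a rough illustration of that pattern, each script reads the NIC's settings
from its environment and acts on them (the variable names below, such as
``INTERFACE``, ``MODE`` and ``IP``, are indicative of what Ganeti exports; the
exact set depends on the Ganeti version and the script being invoked)::

  #!/bin/sh
  # Hedged sketch: consume the NIC settings that Ganeti placed in the
  # environment and configure connectivity accordingly.
  echo "configuring ${INTERFACE:-unknown} (mode=${MODE:-?} ip=${IP:-none})"
  # ... issue the ip/bridge/ebtables commands implied by these values ...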
......
Scripts
-------

The scripts can be divided into two categories:

1. The scripts that are invoked explicitly by Ganeti upon NIC creation.

......
the latter one has the info of the whole instance. The big difference is that
the instance configuration (from the master perspective) might vary or be totally
different from the one that is currently running. The reason is that some
modifications can take place without hotplugging.


kvm-ifup-custom
^^^^^^^^^^^^^^^

Upon instance startup and NIC hotplugging, Ganeti creates the TAP devices that
correspond to the instance's NICs. After that it invokes Ganeti's `kvm-ifup`
script with the TAP name as first argument and an environment that includes
all of the NIC's and the corresponding network's info. This script searches for
......
snf-network-hook
^^^^^^^^^^^^^^^^

This hook gets all static info related to an instance from environment variables
and issues any commands needed. It was used to fix the node's setup upon migration
when the ifdown script was not supported, but now it does nothing.

......
``nsupdate``. Since we add/remove entries during ifup/ifdown scripts, we use
this only during instance remove/shutdown/rename. It does not rely on exported
environment, but it first queries the DNS server to obtain the current entries and
then it invokes the necessary commands to remove them (and the relevant
reverse ones too).
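
A rough sketch of that flow, using placeholder names (the hostname, DNS server
and key file below are illustrative; the hook derives the real ones from the
instance's configuration)::

  # Query the server first for the instance's current address...
  IP=$(dig +short vm1.example.com @ns.example.com)
  REV=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')

  # ...then remove the forward entry and the matching reverse (PTR) entry.
  nsupdate -k /etc/ddns.key <<EOF
  server ns.example.com
  update delete vm1.example.com. A
  update delete ${REV}. PTR
  send
  EOF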


......

This setup has the following characteristics:

* An external gateway on the same collision domain with all nodes on some
  interface (e.g. eth1, eth0.200) is needed.
* Each node is a router for the hosted VMs
* The node itself does not have an IP inside the routed network
* The node does proxy ARP for IPv4 networks
* The node does proxy NDP for IPv6 networks while RA and NA are
......

 - ARP, Request who-has GW_IP tell IP
 - ARP, Reply GW_IP is-at TAP_MAC ``echo 1 > /proc/sys/net/ipv4/conf/TAP/proxy_arp``
 - So `arp -na` inside the VM shows: ``(GW_IP) at TAP_MAC [ether] on eth0``

2) The host wants to know the GW_MAC. Since the node does **not** have an IP
   inside the network we use the dummy one specified above.
......
via simple L3 routing. Let's assume the following:

* ``TABLE`` is the extra routing table
* ``SUBNET`` is the IPv4 subnet where the VM's IP resides

1) Outgoing traffic:

......

 - Packet arrives at router
 - Router knows from proxy ARP that the IP is at DEV_MAC.
 - Router sends Ethernet packet with destination MAC DEV_MAC
 - Host receives the packet on the DEV interface
 - Traffic arriving from DEV is routed via TABLE
   ``ip rule add dev DEV table TABLE``
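
Putting these pieces together, the node's side of this amounts to commands along
the following lines (a sketch only, reusing the placeholders above; the actual
routes are created by the ifup scripts)::

  ip rule add dev DEV table TABLE      # traffic arriving from DEV consults TABLE
  ip route add IP dev TAP table TABLE  # hypothetical host route towards the VM's tap
  ip route show table TABLE            # inspect what ended up in the extra table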
......
^^^^^^^^^^^^^^^^

In order to provide L2 isolation among several VMs we can use ebtables on a
**single** bridge. The infrastructure must provide a physical VLAN or separate
interface shared among all nodes in the cluster. All virtual interfaces will
be bridged on a common bridge (e.g. ``prv0``) and filtering will be done via
ebtables and MAC prefix. The concept is that all interfaces on the same L2
should have the same MAC prefix. MAC prefix uniqueness is guaranteed by
Synnefo and passed to Ganeti as a network option.
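
For example, attaching a VM's tap interface to that shared bridge boils down to
something like the following (``tap0`` is only an illustrative name)::

  ip link set tap0 master prv0   # equivalently: brctl addif prv0 tap0
  ip link set tap0 up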

To ensure isolation we should only allow traffic coming from a tap if it has a
specific source MAC and, at the same time, only allow traffic going to a tap if
its source MAC falls within the same MAC prefix. Applying those rules only in
the FORWARD chain will not
guarantee isolation, because packets whose target MAC is a `multicast
address <http://en.wikipedia.org/wiki/Multicast_address>`_ go through INPUT and
OUTPUT chains. To sum up, the following ebtables rules are applied:
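
As a rough illustration of such rules (not necessarily the exact set applied by
snf-network), assume a tap named ``tap0`` and a MAC prefix of ``aa:00:00``::

  # Frames entering the bridge from the tap must carry a source MAC in the prefix
  ebtables -A FORWARD -i tap0 -s ! aa:00:00:00:00:00/ff:ff:ff:00:00:00 -j DROP
  # Frames forwarded towards the tap must also originate from the same prefix
  ebtables -A FORWARD -o tap0 -s ! aa:00:00:00:00:00/ff:ff:ff:00:00:00 -j DROP
  # Multicast/broadcast frames traverse INPUT/OUTPUT instead, so repeat the checks
  ebtables -A INPUT  -i tap0 -s ! aa:00:00:00:00:00/ff:ff:ff:00:00:00 -j DROP
  ebtables -A OUTPUT -o tap0 -s ! aa:00:00:00:00:00/ff:ff:ff:00:00:00 -j DROP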

