Revision 36f3d51e

b/docs/quick-install-guide.rst

Then open a browser and point to:

-`https://synnefo.live/`
+`https://accounts.synnefo.live/astakos/ui/login`

Local access
------------
......
If you want to access the installation from the same machine it runs on, just
open a browser and point to:

-`https://synnefo.live/`
+`https://accounts.synnefo.live/astakos/ui/login`

The default <domain> is set to ``synnefo.live``. A local BIND is already
set up by `snf-deploy` to serve all FQDNs.
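
A quick way to confirm that the local BIND actually serves the FQDNs (a
minimal sanity check, assuming the default ``synnefo.live`` domain and that
your resolver already points at the deployment's nameserver):

.. code-block:: console

   host accounts.synnefo.live
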
b/docs/snf-deploy.rst
You can use `snf-deploy` to deploy Synnefo in two ways:

1. Create a virtual cluster on your local machine and then deploy on that cluster.
-2. Deploy on a pre-existent cluster of physical nodes running Debian Squeeze.
+2. Deploy on a pre-existent cluster of physical nodes running Debian Wheezy.

Currently, `snf-deploy` is mostly useful for testing/demo installations and is
not recommended for production Synnefo deployments. If you want to
......
scale up and use all available features (e.g. RADOS, Archipelago, etc.).

`snf-deploy` is a Debian package that should be installed locally and allows
-you to install Synnefo on remote nodes (if you go for (2)), or spawn a cluster
-of VMs on your local machine using KVM and then install Synnefo on this cluster
-(if you go for (1)). To this end, here we will break down our description into
-three sections:
+you to install Synnefo locally or on remote nodes, or spawn a cluster of VMs
+on your local machine using KVM and then install Synnefo on this cluster. To
+this end, we will break down our description into three sections:

a. :ref:`snf-deploy configuration <conf>`
b. :ref:`Creating a virtual cluster <vcluster>` (needed for (1))
......
Before getting any further we should mention the roles that `snf-deploy` refers
to. The Synnefo roles are described in detail :ref:`here
-<physical-node-roles>`. Note that multiple roles can co-exist in the same node
+<physical-node-roles>`. Each role consists of certain software components.
+Note that multiple roles can co-exist in the same node
(virtual or physical).

-Currently, `snf-deploy` recognizes the following combined roles:
-
-* **accounts** = **WEBSERVER** + **ASTAKOS**
-* **pithos** = **WEBSERVER** + **PITHOS**
-* **cyclades** = **WEBSERVER** + **CYCLADES**
-* **db** = **ASTAKOS_DB** + **PITHOS_DB** + **CYCLADES_DB**
-
-the following independent roles:
-
-* **qh** = **QHOLDER**
-* **cms** = **CMS**
-* **mq** = **MQ**
-* **ns** = **NS**
-* **client** = **CLIENT**
-* **router**: The node to do any routing and NAT needed
-
-The above define the roles relative to the Synnefo components. However, in
-order to have instances up-and-running, at least one backend must be associated
-with Cyclades. Backends are Ganeti clusters, each with multiple **GANETI_NODE**
-s. Please note that these nodes may be the same as the ones used for the
-previous roles. To this end, `snf-deploy` also recognizes:
-
-* **cluster_nodes** = **G_BACKEND** = All available nodes of a specific backend
-* **master_node** = **GANETI_MASTER**
-
-Finally, it recognizes the group role:
-
-* **existing_nodes** = **SYNNEFO** + (N x **G_BACKEND**)
-
-In the future, `snf-deploy` will recognize all the independent roles of a scale
-out deployment as stated in the :ref:`scale up section <scale-up>`. When that's
-done, it won't need to introduce its own roles (stated here with lowercase) but
-rather use the scale out ones (stated with uppercase on the admin guide).
-
+Currently, `snf-deploy` defines the following roles:
+
+* ns: bind server (DNS)
+* db: postgresql server (database)
+* mq: rabbitmq server (message queue)
+* nfs: nfs server
+* astakos: identity service
+* pithos: storage service
+* cyclades: compute service
+* cms: cms service
+* stats: stats service
+* ganeti: ganeti node
+* master: master node
+
+The previous roles are combinations of the following software components:
+
+* HW: IP and internet access
+* SSH: ssh keys and config
+* DDNS: ddns keys and ddns client config
+* NS: nameserver with ddns config
+* DNS: resolver config
+* APT: apt sources config
+* DB: database server with postgresql
+* MQ: message queue server with rabbitmq
+* NFS: nfs server
+* Mount: nfs mount point
+* Apache: web server with Apache
+* Gunicorn: gunicorn server
+* Common: synnefo common
+* WEB: synnefo webclient
+* Astakos: astakos webapp
+* Pithos: pithos webapp
+* Cyclades: cyclades webapp
+* CMS: cms webapp
+* VNC: vnc authentication proxy
+* Collectd: collectd config
+* Stats: stats webapp
+* Kamaki: kamaki client
+* Burnin: qa software
+* Ganeti: ganeti node
+* Master: ganeti master node
+* Image: synnefo image os provider
+* Network: synnefo networking scripts
+* GTools: synnefo tools for ganeti
+* GanetiCollectd: collectd config for ganeti nodes
+
+Each component defines the following things:
+
+* commands to check prereqs
+* commands to prepare installation
+* list of packages to install
+* specific configuration files (templates)
+* restart/reload commands
+* initialization commands
+* test commands
+
+All a component needs is the info of the node it gets installed to and the
+snf-deploy configuration environment (available after parsing conf files).
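
As an illustration of the component model, a single component can be driven on
a single node via ``snf-deploy run setup`` (documented later in this guide);
the method name below is an assumption mirroring the list above, not a
documented value:

.. code-block:: console

   snf-deploy run setup --node node1 --component NS --method install
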

.. _conf:

......
deployed and is the first to be set before running `snf-deploy`.

Defines the nodes' hostnames and their IPs. Currently `snf-deploy` expects all
-nodes to reside in the same network subnet and domain, and share the same
-gateway and nameserver. Since Synnefo requires FQDNs to operate, a nameserver
-is going to be automatically setup in the cluster by `snf-deploy`. Thus, the
-nameserver's IP should appear among the defined node IPs. From now on, we will
-refer to the nodes with their hostnames. This implies their FQDN and their IP.
+nodes to reside under the same domain. Since Synnefo requires FQDNs to operate,
+a nameserver is going to be automatically set up in the cluster by `snf-deploy`
+and all nodes will use this node as their resolver.

Also, defines the nodes' authentication credentials (username, password).
Furthermore, whether nodes have an extra disk (used for LVM/DRBD storage in
......
tell `snf-deploy` whether the nodes in this file should be created, or treated
as pre-existing.

+In case you deploy all-in-one, you can install the `snf-deploy` package on the
+target node and use the ``--autoconf`` option. Then you only need to change
+the passwords section; everything else will be configured automatically.
+
An example ``nodes.conf`` file looks like this:

FIXME: example file here
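
Pending that example, a purely hypothetical sketch of the kind of information
described above; apart from ``[ips]`` and ``subnet``, which are referenced
elsewhere in this guide, every section and option name is an illustrative
assumption:

.. code-block:: ini

   ; hypothetical sketch -- not the official sample
   [network]
   domain = synnefo.live
   subnet = 192.168.0.0/24

   [ips]
   node1 = 192.168.0.1
   node2 = 192.168.0.2
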
......
The important section here is the roles. In this file we assign each of the
roles described in the :ref:`introduction <snf-deploy>` to a specific node. The
node is one of the nodes defined at ``nodes.conf``. Note that we refer to nodes
-with their short hostnames.
+with their ID (node1, node2, etc).
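
For illustration only, such an assignment could look like the following sketch
(the option layout is an assumption; the role names are the ones listed in the
introduction):

.. code-block:: ini

   ; hypothetical sketch of a roles assignment
   [roles]
   ns = node1
   db = node1
   mq = node1
   nfs = node1
   astakos = node2
   pithos = node2
   cyclades = node2
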

Here we also define all credentials related to users needed by the various
Synnefo services (database, RAPI, RabbitMQ) and the credentials of a test
......
defined at ``nodes.conf``.

Here we include all info with regard to Ganeti backends. That is: the master
-node, its floating IP, the volume group name (in case of LVM support) and the
-VMs' public network associated to it. Please note that currently Synnefo
-expects different public networks per backend but still can support multiple
-public networks per backend.
+node, its floating IP, the rest of the cluster nodes (if any), the volume group
+name (in case of LVM support) and the VMs' public network associated to it.

FIXME: example file here
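
As a stopgap, a hypothetical sketch of one backend section covering the fields
described above (all option names here are assumptions, not the actual
configuration keys):

.. code-block:: ini

   ; hypothetical sketch of a backend section
   [ganeti1]
   master_node = node3
   master_floating_ip = 192.168.0.10
   cluster_nodes = node3, node4
   vg = ganeti
   public_network = 10.0.1.0/24
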
......
``vcluster.conf``
-----------------

-This file defines options that are relevant to the virtual cluster creationi, if
+This file defines options that are relevant to the virtual cluster creation, if
one chooses to create one.

There is an option to define the URL of the Image that will be used as the host
......
Synnefo on existing physical nodes, you should skip this section.

The first thing you need to deploy a virtual cluster is a Debian Base image,
-which will be used to spawn the VMs. We already provide an 8GB Debian Squeeze
-Base image with preinstalled keys and network-manager hostname hooks. This
-resides on our production Pithos service. Please see the corresponding
-``squeeze_image_url`` variable in ``vcluster.conf``. The image can be fetched
-by running:
+which will be used to spawn the VMs.
+
+FIXME: Find a way to provide this image.
+
+The virtual cluster can be created by running:

.. code-block:: console

-   snf-deploy vcluster image
+   snf-deploy vcluster

-This will download the image from the URL defined at ``squeeez_image_url``
+This will download the image from the URL defined at ``squeeze_image_url``
(Pithos by default) and save it locally under ``/var/lib/snf-deploy/images``.

TODO: mention related options: --img-dir, --extra-disk, --lvg, --os

-Once you have the image, then you need to setup the local machine's networking
-appropriately. You can do this by running:
-
-.. code-block:: console
-
-   snf-deploy vcluster network
-
-This will add a bridge (defined with the ``bridge`` option inside
+Afterwards it will add a bridge (defined with the ``bridge`` option inside
``vcluster.conf``), iptables to allow traffic from/to the cluster, and enable
forwarding and NAT for the selected network subnet (defined inside
``nodes.conf`` in the ``subnet`` option).
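
You can verify this step on the local machine; a minimal check, assuming the
bridge was named ``br0`` via the ``bridge`` option:

.. code-block:: console

   ip addr show dev br0
   iptables -t nat -L POSTROUTING -n
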
To complete the preparation, you need a DHCP server that will provide the
selected hostnames and IPs to the cluster (defined under ``[ips]`` in
-``nodes.conf``). To do so, run:
-
-.. code-block:: console
+``nodes.conf``).

-   snf-deploy vcluster dhcp
+It will launch a dnsmasq instance, acting only as DHCP server and listening
+only on the cluster's bridge.

-This will launch a dnsmasq instance, acting only as DHCP server and listening
-only on the cluster's bridge. Every time you make changes inside ``nodes.conf``
-you should re-create the dnsmasq related files (under ``/etc/snf-deploy``) by
-passing --save-config option.
-
-After running all the above preparation tasks we can finally create the cluster
-defined in ``nodes.conf`` by running:
-
-.. code-block:: console
-
-   snf-deploy vcluster create
-
-This will launch all the needed KVM virtual machines, snapshotting the image we
-fetched before. Their taps will be connected with the already created bridge
-and their primary interface will get the given address.
+Finally it will launch all the needed KVM virtual machines, snapshotting the
+image we fetched before. Their taps will be connected with the already created
+bridge and their primary interface will get the given address.
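
Once the VMs are up, a quick reachability check from the local machine
(assuming the default domain and a node named ``node1`` in ``nodes.conf``):

.. code-block:: console

   ping -c 1 node1.synnefo.live
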

Now that we have the nodes ready, we can move on and deploy Synnefo on them.

......
Node Requirements
-----------------

- - OS: Debian Squeeze
- - authentication: `root` with same password for all nodes
+ - OS: Debian Wheezy
+ - authentication: `root` user with a corresponding password for each node
 - primary network interface: `eth0`
- - primary IP in the same IPv4 subnet and network domain
 - spare network interfaces: `eth1`, `eth2` (or vlans on `eth0`)

In case you have created a virtual cluster as described in the :ref:`section
......
physical cluster, you need to set them up manually by yourself, before
proceeding with the Synnefo installation.

-Preparing the Synnefo deployment
---------------------------------
-
-The following actions are mandatory and must run before the actual deployment.
-In the following we refer to the sub commands of ``snf-deploy prepare`` and
-what they actually do.
-
-Synnefo expects FQDNs and therefore a nameserver (BIND) should be setup in a
-node inside the cluster. All nodes along with your local machine should use
-this nameserver and search in the corresponding network domain. To this end,
-add to your local ``resolv.conf`` (please change the default values with the
-ones of your custom configuration):
-
-.. code-block:: console
-
-   search <your_domain> synnefo.deploy.local
-   nameserver 192.168.0.1
-
-WARNING: In case you are running the installation on physical nodes please
-ensure that they have the same `resolv.conf` and it does not change during
-and after installation (because of NetworkManager hooks or something..)
-
-To actually setup the nameserver in the node specified as ``ns`` in
-``synnefo.conf`` run:
-
-.. code-block:: console
-
-   snf-deploy prepare ns
-
-To do some node tweaking and install correct `id_rsa/dsa` keys and `authorized_keys`
-needed for password-less intra-node communication run:
-
-.. code-block:: console
-
-   snf-deploy prepare hosts
-
-At this point you should have a cluster with FQDNs and reverse DNS lookups
-ready for the Synnefo deployment. To sum up, we mention all the node
-requirements for a successful Synnefo installation, before proceeding.
-
-To check the network configuration (FQDNs, connectivity):
-
-.. code-block:: console
-
-   snf-deploy prepare check
-
-WARNING: In case ping fails check ``/etc/nsswitch.conf`` hosts entry and put dns
-after files!!!
-
-To setup the apt repository and update each nodes' package index files:
-
-.. code-block:: console
-
-   snf-deploy prepare apt
-
-Finally Synnefo needs a shared file system, so we need to setup the NFS server
-on node ``pithos`` defined in ``synnefo.conf``:
-
-.. code-block:: console
-
-   snf-deploy prepare nfs
-
-If everything is setup correctly and all prerequisites are met, we can start
-the Synnefo deployment.

Synnefo deployment
------------------
......

.. code-block:: console

-   snf-deploy synnefo -vvv
+   snf-deploy all -vvv

This might take a while.

......
from your local machine (make sure you have already set up your local
``resolv.conf`` to point at the cluster's DNS):

-| https://accounts.synnefo.deploy.local/im/
+| https://accounts.synnefo.live/astakos/ui/

and login with:

-| username: dimara@grnet.gr password: lala
+| username: user@synnefo.org password: 12345

or the ``user_name`` and ``user_passwd`` defined in your ``synnefo.conf``.
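
Before opening a browser you can confirm that Astakos answers; ``-k`` is used
on the assumption that the deployment serves self-signed certificates:

.. code-block:: console

   curl -k https://accounts.synnefo.live/astakos/ui/
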
Take a small tour checking out Pithos and the rest of the Web UI. You can
-upload a sample file on Pithos to see that Pithos is working. Do not try to
-create a VM yet, since we have not yet added a Ganeti backend.
-
-If everything seems to work, we go ahead to the last step which is adding a
-Ganeti backend.
-
-Adding a Ganeti Backend
------------------------
-
-Assuming that everything works as expected, you must have Astakos, Pithos, CMS,
-DB and RabbitMQ up and running. Cyclades should work too, but partially. That's
-because no backend is registered yet. Let's setup one. Currently, Synnefo
-supports only Ganeti clusters as valid backends. They have to be created
-independently with `snf-deploy` and once they are up and running, we register
-them to Cyclades. From version 0.12, Synnefo supports multiple Ganeti backends.
-`snf-deploy` defines them in ``ganeti.conf``.
-
-After setting up ``ganeti.conf``, run:
+upload a sample file on Pithos to see that Pithos is working. To test that
+everything went as expected, visit from your local machine:

.. code-block:: console

-   snf-deploy backend create --backend-name ganeti1 -vvv
+    https://cyclades.synnefo.live/cyclades/ui/

-where ``ganeti1`` should have previously been defined as a section in
-``ganeti.conf``. This will create the ``ganeti1`` backend on the corresponding
-nodes (``cluster_nodes``, ``master_node``) defined in the ``ganeti1`` section
-of the ``ganeti.conf`` file. If you are an experienced user and want to deploy
-more than one Ganeti backend you should create multiple sections in
-``ganeti.conf`` and re-run the above command with the corresponding backend
-names.
+and try to create a VM. Also create a Private Network and try to connect it. If
+everything works, you have set up Synnefo successfully. Enjoy!

-After creating and adding the Ganeti backend, we need to setup the backend
-networking. To do so, we run:

-.. code-block:: console
+Adding another Ganeti Backend
+-----------------------------

-   snf-deploy backend network --backend-name ganeti1
+From version 0.12, Synnefo supports multiple Ganeti backends.
+`snf-deploy` defines them in ``ganeti.conf``.

-And finally, we need to setup the backend storage:
+After adding another section in ``ganeti.conf``, run:

.. code-block:: console

-   snf-deploy backend storage --backend-name ganeti1
+   snf-deploy backend --cluster-name ganeti2 -vvv

-This command will first check the ``extra_disk`` in ``nodes.conf`` and try to
-find it on the nodes of the cluster. If the nodes indeed have that disk,
-`snf-deploy` will create a PV and the corresponding VG and will enable LVM and
-DRBD storage in the Ganeti cluster.

-If the option is blank or `snf-deploy` can't find the disk on the nodes, LVM
-and DRBD will be disabled and only Ganeti's ``file`` disk template will be
-enabled.
+snf-deploy for Ganeti
+=====================

-To test everything went as expected, visit from your local machine:
+`snf-deploy` can be used to deploy a Ganeti cluster on pre-existing nodes
+by issuing:

.. code-block:: console

-    https://cyclades.synnefo.deploy.local/ui/
-
-and try to create a VM. Also create a Private Network and try to connect it. If
-everything works, you have setup Synnefo successfully. Enjoy!
+   snf-deploy ganeti --cluster-name ganeti3 -vvv

snf-deploy as a DevTool
=======================

For developers, a single node setup is highly recommended and `snf-deploy` is a
-very helpful tool. `snf-deploy` also supports updating packages that are
-locally generated. For this to work please add all \*.deb files in packages
-directory (see ``deploy.conf``) and set the ``use_local_packages`` option to
-``True``. Then run:
+very helpful tool. `snf-deploy` also supports setting up components using
+packages that are locally generated. For this to work, please add all related
+\*.deb files in the packages directory (see ``deploy.conf``) and set the
+``use_local_packages`` option to ``True``. Then run:
+
+.. code-block:: console
+
+   snf-deploy run <action1> [<action2>..]
+
+to execute predefined actions or:

.. code-block:: console

-   snf-deploy synnefo update --use-local-packages
-   snf-deploy backend update --backend-name ganeti2 --use-local-packages
+   snf-deploy run setup --node nodeX \
+        --role ROLE | --component COMPONENT --method METHOD

-For advanced users, `snf-deploy` gives the ability to run one or more times
-independently some of the supported actions. To find out which are those, run:
+to setup a synnefo role on a target node or run a specific component's method.
+
+For instance, to add another node to an existing ganeti backend run:

.. code-block:: console

-   snf-deploy run --help
+   snf-deploy run setup --node node5 --role ganeti --cluster-name ganeti3

+`snf-deploy` keeps track of installed components per node in
+``/etc/snf-deploy/status.conf``. If a deployment command fails, the developer
+can make the required fix and then re-run the same command; `snf-deploy` will
+not re-install components that have already been set up and whose status
+is ``ok``.
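
For example, if ``snf-deploy all -vvv`` fails midway, fix the underlying
problem and simply re-run the same command; components already recorded as
``ok`` are skipped:

.. code-block:: console

   snf-deploy all -vvv
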
