Revision 0c068fc6 docs/quick-install-admin-guide.rst

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their public IPs are
"4.3.2.1" and "4.3.2.2" respectively. It is important that the two machines
are under the same domain name. In case you choose to follow a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the Node1 section below for more.

General Prerequisites
=====================
......
Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * public certificate
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent
    * dns server

You can install apache2, postgresql and ntp by running:

......
       # /etc/init.d/gunicorn stop

Certificate Creation
~~~~~~~~~~~~~~~~~~~~

Node1 will host Cyclades. Cyclades should communicate with the other Synnefo
tools over a trusted connection. In order for the connection to be trusted,
the keys provided to apache below should be signed with a certificate. This
certificate should be added to all nodes. In case you don't have signed keys,
you can create a self-signed certificate and sign your keys with it. To do so,
on node1 run:

.. code-block:: console

   # aptitude install openvpn
   # mkdir /etc/openvpn/easy-rsa
   # cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
   # cd /etc/openvpn/easy-rsa/2.0
   # vim vars

In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.
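For instance, the identity defaults near the bottom of ``vars`` can be
adjusted like this (the values shown are illustrative placeholders, not
required ones):

.. code-block:: console

   export KEY_COUNTRY="GR"
   export KEY_PROVINCE="Attica"
   export KEY_CITY="Athens"
   export KEY_ORG="example"
   export KEY_EMAIL="admin@example.com"

These defaults are picked up by ``./build-ca`` and ``./build-key-server``
below, so you won't have to retype them for every certificate.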

.. code-block:: console

   # . ./vars
   # ./clean-all

Now you can create the certificate:

.. code-block:: console

   # ./build-ca

The previous will create a ``ca.crt`` file. Copy this file under the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

   # update-ca-certificates

to update the records. You will have to do the same on node2 as well.
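As an optional sanity check, you can ask openssl to verify the certificate
against the updated system store; a correctly installed self-signed CA should
report ``OK``:

.. code-block:: console

   # openssl verify /usr/local/share/ca-certificates/ca.crt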

Now you can create the keys and sign them with the certificate:

.. code-block:: console

   # ./build-key-server node1.example.com

This will create a .pem and a .key file in your current folder. Copy these to
``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively and use them in the
apache2 configuration file below instead of the defaults.
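For reference, inside the ssl vhost you would then point apache at the new
files instead of the snakeoil defaults; a sketch, assuming ``build-key-server``
produced files named after the host:

.. code-block:: console

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/node1.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/node1.example.com.key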

Apache2 setup
~~~~~~~~~~~~~

......
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

......

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
......
   # chown www-data:www-data data
   # chmod g+ws data

DNS server setup
~~~~~~~~~~~~~~~~

If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:

.. code-block:: console

   # apt-get install dnsmasq

Then edit your ``/etc/hosts`` file as follows:

.. code-block:: console

   4.3.2.1     node1.example.com
   4.3.2.2     node2.example.com

Finally, edit the ``/etc/dnsmasq.conf`` file and specify the ``listen-address``
and the ``interface`` you would like to listen on.
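A minimal ``/etc/dnsmasq.conf`` for this guide's example setup might look like
the following (the interface name ``eth0`` is an assumption; use whichever
interface carries 4.3.2.1):

.. code-block:: console

   listen-address=4.3.2.1
   interface=eth0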

Also add the following to your ``/etc/resolv.conf`` file:

.. code-block:: console

   nameserver 4.3.2.1

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

......
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent
    * certificates
    * dns setup

You can install the above by running:

......

       # /etc/init.d/apache2 stop

Acquire certificate
~~~~~~~~~~~~~~~~~~~

Copy the certificate you created before on node1 (``ca.crt``) under the
directory ``/usr/local/share/ca-certificates/`` and run:

.. code-block:: console

   # update-ca-certificates

to update the records.

DNS Setup
~~~~~~~~~

Add the following line to the ``/etc/resolv.conf`` file:

.. code-block:: console

   nameserver 4.3.2.1

to inform the node about the new DNS server.

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

Installation of Astakos on node1
================================

......

Astakos uses the Django internal email delivering mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below. Alter the following example to meet your
smtp server's characteristics. Notice that an smtp server is needed for a
proper installation.

.. code-block:: python

    # /etc/synnefo/00-snf-common-admins.conf
    EMAIL_HOST = "mysmtp.server.synnefo.org"
    EMAIL_HOST_USER = "<smtpuser>"
    EMAIL_HOST_PASSWORD = "<smtppassword>"
......
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json pithos.json

Notice that in this installation Astakos and Cyclades are on node1 and Pithos
is on node2.

Setting Default Base Quota for Resources
----------------------------------------

......
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. These packages will also
install the proper version of Ganeti. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools

Ganeti will make use of drbd. To enable this and make the configuration
permanent, you have to do the following:

.. code-block:: console

   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
   # echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). This IP is needed to communicate with
the Ganeti cluster. Make sure node1 and node2 have the same dsa/rsa keys and
authorised_keys for password-less root ssh between each other. If they do not,
skip passing --no-ssh-init below, but be aware that it will replace the
/root/.ssh/* related files and you might lose access to the master node. Also,
Ganeti will need a volume to host your VMs' disks, so make sure there is an
lvm volume group named ``ganeti``. Finally, set up a bridge interface on the
host machines (e.g. br0). This will be needed for the network configuration
afterwards.
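As an illustration, on Debian the bridge can be made persistent in
``/etc/network/interfaces`` roughly as follows (requires the ``bridge-utils``
package; the interface name ``eth0`` and the address are this guide's
assumptions, adapt them to your setup):

.. code-block:: console

   auto br0
   iface br0 inet static
        address 4.3.2.1
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0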

Then run on node1:

.. code-block:: console
......
able to access the Pithos database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning::
    snf-image uses ``curl`` for handling URLs. This means that it will not
    work out of the box if you try to use URLs served by servers which do not
    have a valid certificate. In case you haven't followed the guide's
    directions about the certificates, then in order to circumvent this you
    should edit the file ``/etc/default/snf-image`` on every node and change
    ``#CURL="curl"`` to ``CURL="curl -k"``.
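This edit can also be scripted; a one-liner sketch (assuming the stock file
still contains the commented-out ``#CURL="curl"`` line) is:

.. code-block:: console

   # sed -i 's/#CURL="curl"/CURL="curl -k"/' /etc/default/snf-image

Note that ``-k`` disables certificate verification entirely, so prefer the
certificate setup described earlier where possible.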

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:
......
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos (our case), this
               should have the format ``pithos://<UUID>/<container>/<filename>``:
               * ``UUID``: the user's UUID, found in the Cyclades Web UI under API access
               * ``container``: ``pithos`` (default, if the Web UI was used)
               * ``filename``: the name of file (visible also from the Web UI)
 * ``img_properties``: taken from the metadata file. Used only the two mandatory
......
                      --net 0:ip=pool,network=test-net-public \
                      testvm2

If the above returns successfully, connect to the new VM through VNC as before
and run:

.. code-block:: console

......

   CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
   CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
   CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
......
   node2 # snf-manage reconcile-resources-pithos --fix
   node1 # snf-manage reconcile-resources-cyclades --fix

If all the above return successfully, then you have finished with the Cyclades
installation and setup.

2171 2284
.. code-block:: console

   $ kamaki image register "Debian Base" \
                           pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump \
                           --public \
                           --disk-format=diskdump \
                           --property OSFAMILY=linux --property ROOT_PARTITION=1 \
......
                           --property sortorder=1 --property USERS=root --property OS=debian

This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it, and it is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the rest
