Revision 7bc95d52

b/docs/snf-burnin.rst
^^^^^^^^^^

:ref:`snf-burnin <snf-burnin>` is an integration testing tool for a running
Synnefo deployment. Using the Synnefo REST APIs, it simulates a real user and
tries to identify any bugs or performance issues by running a series of tests.
The tests are divided into the following categories:

* :ref:`Authentication Tests <unauthorizedtestcase>`
* :ref:`Image Tests <imagestestcase>`
* :ref:`Flavor Tests <flavorstestcase>`
* :ref:`Server Tests <serverstestcase>`
* :ref:`Network Tests <networktestcase>`
* :ref:`Storage Tests <pithostestcase>`


Usage
=====

:ref:`snf-burnin <snf-burnin>` is a command line tool written in Python. It
supports a number of command line options through which the user can change the
behaviour of the tests.

A typical usage of snf-burnin is:

::

  snf-burnin --token=USERS_SECRET_TOKEN \
             --auth-url="https://accounts.synnefo.org/identity/v2.0/" \
             --system-images-user=SYSTEM_IMAGES_USER_ID \
             --image-id=IMAGE_ID \
             --log-folder=LOG_FOLDER

The above options are the minimal (mandatory) ones that one has to specify in
order for snf-burnin to function properly. The first two are the credentials
needed to access Synnefo's REST API and can be found in the user's dashboard.
The third is needed by some :ref:`Image Tests <imagestestcase>` as we will see
later. The fourth tells snf-burnin which image to use for creating our test
servers and the last one specifies the log folder where any results should be
saved.

For more information about snf-burnin and its command line options, run
snf-burnin with the --help option.

::

  Usage: snf-burnin [options]

  snf-burnin runs a number of test scenarios on a Synnefo deployment.

  Options:
    -h, --help            show this help message and exit
    --auth-url=AUTH_URL   The AUTH URI to use to reach the Synnefo API
    --system-images-user=SYSTEM_IMAGES_USER
                          Owner of system images
    --token=TOKEN         The token to use for authentication to the API
    --nofailfast          Do not fail immediately if one of the tests fails
                          (EXPERIMENTAL)
    --no-ipv6             Disables ipv6 related tests
    --action-timeout=TIMEOUT
                          Wait SECONDS seconds for a server action to complete,
                          then the test is considered failed
    --build-warning=TIMEOUT
                          Warn if TIMEOUT seconds have passed and a build
                          operation is still pending
    --build-fail=BUILD_TIMEOUT
                          Fail the test if TIMEOUT seconds have passed and a
                          build operation is still incomplete
    --query-interval=INTERVAL
                          Query server status when requests are pending every
                          INTERVAL seconds
    --fanout=COUNT        Spawn up to COUNT child processes to execute in
                          parallel, essentially have up to COUNT server build
                          requests outstanding (EXPERIMENTAL)
    --force-flavor=FLAVOR ID
                          Force all server creations to use the specified FLAVOR
                          ID instead of a randomly chosen one, useful if disk
                          space is scarce
    --image-id=IMAGE ID   Test the specified image id, use 'all' to test all
                          available images (mandatory argument)
    --show-stale          Show stale servers from previous runs, whose name
                          starts with `snf-test-'
    --delete-stale        Delete stale servers from previous runs, whose name
                          starts with `snf-test-'
    --force-personality=PERSONALITY_PATH
                          Force a personality file injection.
                          File path required.
    --log-folder=LOG_FOLDER
                          Define the absolute path where the output
                          log is stored.
    -V, --verbose         Print detailed output about multiple processes
                          spawning
    --set-tests=TESTS     Set comma seperated tests for this run.
                          Available tests: auth, images, flavors,
                          servers, server_spawn,
                          network_spawn, pithos.
                          Default = all


Log files
=========

In each run, snf-burnin stores log files under the folder defined in the
--log-folder parameter. For every run, it creates a new subfolder, using a
timestamp and the image-id as a unique name. The name prefixes of the log
files are:

* details: Showing the complete log of the snf-burnin run.
* error: Showing the testcases that encountered a runtime error.
......

Detailed description of testcases
=================================

Here we have a complete list of all the tests snf-burnin performs, each listed
under the category to which it belongs. The user can choose to run some or all
of the categories listed below using the "--set-tests" command line flag.

.. _unauthorizedtestcase:

UnauthorizedTestCase
--------------------
* Use a random token and try to authenticate to the Astakos service. The
  expected response should be "401 Unauthorized" (see the sketch below).
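
A minimal sketch of this check, using the plain ``requests`` library (the
``/tokens`` endpoint and request body are assumptions based on the OpenStack
Identity v2.0 API that Astakos exposes; snf-burnin itself may use the kamaki
client instead):

::

  import json
  import uuid

  import requests

  AUTH_URL = "https://accounts.synnefo.org/identity/v2.0"

  def test_unauthorized_access():
      # A random token must be rejected with "401 Unauthorized".
      random_token = str(uuid.uuid4())
      body = {"auth": {"token": {"id": random_token}}}
      response = requests.post(AUTH_URL + "/tokens",
                               data=json.dumps(body),
                               headers={"Content-Type": "application/json"})
      assert response.status_code == 401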

.. _imagestestcase:

ImagesTestCase
--------------
* Request from Cyclades the list of all registered images and check that its
  length is greater than 0 (i.e. test that there are registered images for the
  users to use).
* Request from Cyclades the list of all registered images with details and
  check that its length is greater than 0.
* Test that the two lists retrieved earlier contain exactly the same images
  (see the sketch after this list).
* Using the SYSTEM_IMAGES_USER_ID choose only the images that belong to the
  system user and check that their names are unique. This test cannot be
  applied to all images, as the users can name their images whatever they want.
* Again, for the images that belong to the system user, check that the
  "osfamily" and the "root_partition" metadata values have been defined. These
  metadata values are mandatory for an image to be used.
* Download from Pithos+ the image specified with the "--image-id" parameter and
  save it locally.
* Create a new container in Pithos+ named "images".
* Upload the downloaded image to Pithos+ under the "images" container.
* Use the Plankton service to register the above image. Set the "osfamily" and
  "root_partition" metadata values, which are mandatory.
* Request from Cyclades the list of all registered images and check that our
  newly registered image is among them.
* Delete the image from Pithos+ and also the local copy on our disk.
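
A rough sketch of the first three checks (the ``/images`` and
``/images/detail`` paths are assumptions based on the OpenStack Compute style
API that Cyclades exposes, and the URL and token are placeholders; snf-burnin
itself may go through the kamaki client for these calls):

::

  import requests

  def check_image_lists(cyclades_base_url, token):
      headers = {"X-Auth-Token": token}
      simple = requests.get(cyclades_base_url + "/images",
                            headers=headers).json()["images"]
      detail = requests.get(cyclades_base_url + "/images/detail",
                            headers=headers).json()["images"]
      # Both lists must be non-empty and must describe the same images.
      assert len(simple) > 0
      assert len(detail) > 0
      assert (set(img["id"] for img in simple) ==
              set(img["id"] for img in detail))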

.. _flavorstestcase:

FlavorsTestCase
---------------
* Request from Cyclades the list of all flavors and check that its length is
  greater than 0 (i.e. test that there are flavors for the users to use).
* Request from Cyclades the list of all flavors with details and check that its
  length is greater than 0.
* Test that the two lists retrieved earlier contain exactly the same flavors.
* Test that all flavors have unique names.
* Test that all flavors have a name of the form CxxRyyDzz, where xx is the vCPU
  count, yy is the RAM in MiB, and zz is the Disk in GiB (see the sketch after
  this list).
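
The naming convention from the last check can be verified with a simple
regular expression (a minimal sketch; the pattern follows the CxxRyyDzz
convention described above rather than the actual snf-burnin source):

::

  import re

  # CxxRyyDzz, e.g. "C2R2048D20": 2 vCPUs, 2048 MiB RAM, 20 GiB disk.
  FLAVOR_NAME_RE = re.compile(r"^C(\d+)R(\d+)D(\d+)$")

  def flavor_name_is_valid(name):
      return FLAVOR_NAME_RE.match(name) is not None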

.. _serverstestcase:

ServersTestCase
---------------
* Request from Cyclades the list of all servers, with and without details, and
  check that the two lists have the same length.
* Test that the simple and detailed server lists contain the same names.

SpawnServerTestCase
-------------------
* Submit a create server request to the Cyclades service. Use the IMAGE_ID
  specified from the command line. If a FLAVOR_ID was specified as well, use
  that one, else choose one randomly. The name of the new server will start
  with "snf-test-" followed by a timestamp, so we can know which servers have
  been created by snf-burnin and when. Also check that the response from the
  Cyclades service contains the correct server_name, server_flavor_id and
  server_image_id, and that the status of the server is currently "BUILD".
  Finally, from the above response, extract the server's id and password.
* Request from Cyclades the list of all servers with details and check that our
  newly created server has the correct server_name, server_flavor_id and
  server_image_id, and that its status is "BUILD".
* Request from Cyclades the details of the image we used to build our server.
  Extract the "os" and "users" metadata values. Using the first one, update the
  server's metadata and set its "os" metadata value to be the same as the one
  from the image's metadata. Using the second one, determine the username to
  use for future connections to this host.
* Retrieve the server's metadata from Cyclades and verify that the server's
  "os" metadata key is set based on the image's metadata.
* Wait until the server changes state to ACTIVE. This is done by querying the
  service for the server's state every QUERY_INTERVAL seconds until
  BUILD_TIMEOUT has been reached. Both the QUERY_INTERVAL and BUILD_TIMEOUT
  values can be changed from the command line.
* Request from the Cyclades service a VNC console to our server. In order to
  verify that the returned connection is indeed a VNC one, snf-burnin
  implements the first basic steps of the RFB protocol (a minimal sketch of
  this handshake is shown after this list):

  * Step 1. Send the ProtocolVersion message (par. 6.1.1)
  * Step 2. Check that only VNC Authentication is supported (par. 6.1.2)
  * Step 3. Request VNC Authentication (par. 6.1.2)
  * Step 4. Receive the Challenge (par. 6.2.2)
  * Step 5. DES-encrypt the challenge, using the password as key (par. 6.2.2)
  * Step 6. Check that the SecurityResult is correct (par. 6.1.3)

* Request from Cyclades the server's details and check that our server has
  been assigned an IPv4 address.
* Check that our server has been assigned an IPv6 address. This test can be
  skipped if for some reason the targeted Synnefo deployment doesn't support
  IPv6.
* Test that our server responds to ping requests on its IPv4 address.
* Test that our server responds to ping requests on its IPv6 address. This
  test can also be skipped.
* Submit a shutdown request for our server.
* Wait and verify that the status of our server becomes "STOPPED".
* Submit a start request for our server.
* Wait and verify that the status of our server becomes "ACTIVE" again.
* Test if the server responds to ping on its IPv4 address (to verify it is up
  and running).
* Test if the server responds to ping on its IPv6 address (to verify it is up
  and running).
* If the server is a Linux machine, SSH to it using its IPv4 address and verify
  that it has a valid hostname.
* If the server is a Linux machine, SSH to it using its IPv6 address and verify
  that it has a valid hostname.
* If the server is a Windows machine, try to connect to its RDP port using both
  its IPv4 and IPv6 addresses.
* If, during the creation of the server, the user chose a personality file to
  be used, check that this file is present in the server and that its contents
  are correct.
* Submit a server delete request.
* Wait and verify that the status of our server becomes "DELETED".
* Request from Cyclades the list of all servers and verify that our newly
  deleted server is not in the list.
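
The VNC console check is the most protocol-heavy of these steps. Below is a
minimal sketch of its first two steps (the RFB 3.7+ version exchange and
security-type negotiation), using only the standard library; the host and port
are assumed to come from the console details returned by Cyclades, and the
remaining steps (DES-encrypting the challenge with the server's password) are
omitted:

::

  import socket

  def looks_like_vnc(host, port, timeout=10.0):
      sock = socket.create_connection((host, port), timeout)
      try:
          # Step 1: the server speaks first, sending its ProtocolVersion,
          # e.g. "RFB 003.008\n" (12 bytes).
          version = sock.recv(12)
          if not version.startswith(b"RFB "):
              return False
          # Reply with a version we support.
          sock.sendall(b"RFB 003.008\n")
          # Step 2: the server sends the number of security types it
          # offers, followed by one byte per type; type 2 is VNC
          # Authentication.
          count = ord(sock.recv(1))
          types = bytearray(sock.recv(count))
          return 2 in types
      finally:
          sock.close()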

.. _networktestcase:

NetworkTestCase
---------------
* Submit a create request for server A.
* Wait and verify that the status of server A becomes "ACTIVE".
* Submit a create request for server B.
* Wait and verify that the status of server B becomes "ACTIVE".
* Submit a create private network request. Wait and verify that the status of
  the network becomes "ACTIVE".
* Connect the two servers (A and B) to the newly created network. Wait and
  verify that both machines got an extra NIC, and hence have been connected to
  the network.
* Reboot server A.
* Test if server A responds to ping on its IPv4 address (to verify it is up and
  running).
* Reboot server B.
* Test if server B responds to ping on its IPv4 address (to verify it is up and
  running).
* Connect via SSH and set up the new network interface in server A.
* Connect via SSH and set up the new network interface in server B.
* Connect via SSH to server A and test if server B responds to ping via their
  new interfaces (see the sketch after this list).
* Disconnect both servers from the network. Check the network details and
  verify that both servers have been successfully disconnected.
* Send a delete network request. Verify that the network has actually been
  deleted.
* Send requests to delete the servers and wait until they are actually deleted.
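
The ping-through-the-private-network step can be sketched with the paramiko
SSH library (a rough illustration; the addresses, username and password are
placeholders, and snf-burnin's actual SSH handling may differ):

::

  import paramiko

  def ping_b_from_a(server_a_ip, username, password, server_b_private_ip):
      # SSH into server A over its public address...
      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect(server_a_ip, username=username, password=password)
      try:
          # ...and ping server B's address on the private network.
          cmd = "ping -c 3 -w 10 %s" % server_b_private_ip
          stdin, stdout, stderr = client.exec_command(cmd)
          # Exit status 0 means server B answered over the new interface.
          return stdout.channel.recv_exit_status() == 0
      finally:
          client.close()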

.. _pithostestcase:

PithosTestCase
--------------
* Request from Pithos+ the list of containers and check that its length is
  greater than 0 (i.e. test that there are containers).
* Test that the containers have unique names.
* Create a new container. Choose a random name for our container and then check
  that it has been successfully created.
* Upload a file to Pithos+ under our newly created container.
* Download the file from Pithos+ and test that it is the same as the one
  uploaded (see the sketch after this list).
* Remove the created file and container from Pithos+ and verify that they have
  been successfully deleted.
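
A sketch of the upload/download comparison (the object URL layout is an
assumption based on the OpenStack Object Storage style API that Pithos+
exposes, and the base URL, account and token are placeholders):

::

  import hashlib

  import requests

  def roundtrip_is_intact(pithos_base_url, account, token,
                          container, object_name, data):
      url = "%s/%s/%s/%s" % (pithos_base_url, account, container, object_name)
      headers = {"X-Auth-Token": token}
      # Upload the object...
      requests.put(url, headers=headers, data=data).raise_for_status()
      # ...download it back...
      response = requests.get(url, headers=headers)
      response.raise_for_status()
      # ...and compare digests instead of raw bytes.
      return (hashlib.sha256(response.content).hexdigest() ==
              hashlib.sha256(data).hexdigest())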


Burnin as an alert tool
=======================

Burnin can be used to verify that a Synnefo deployment is working as expected
and to notify the admins in case of an error. For this purpose there is a
script under the /snf-tools/conf directory, named **snf-burnin-run.sh**, which
is intended to be run from cron in order to execute burnin periodically. It
runs many instances of burnin simultaneously, for a number of different users,
and reports errors through email.
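
The sketch below is a rough Python analogue of what such a wrapper automates
(illustrative only: the user list, e-mail addresses and option values are
placeholders, and the real **snf-burnin-run.sh** is a shell wrapper around the
snf-burnin command shown earlier):

::

  import smtplib
  import subprocess
  from email.mime.text import MIMEText

  # Hypothetical (token, image-id) pairs, one per test user.
  USERS = [("token-of-user-1", "image-1"),
           ("token-of-user-2", "image-2")]
  AUTH_URL = "https://accounts.synnefo.org/identity/v2.0/"

  def launch_burnin(token, image_id):
      # Other mandatory options (--system-images-user, --log-folder)
      # are omitted here for brevity.
      cmd = ["snf-burnin", "--token=%s" % token,
             "--auth-url=%s" % AUTH_URL, "--image-id=%s" % image_id]
      return subprocess.Popen(cmd, stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)

  def mail_admins(body):
      msg = MIMEText(body)
      msg["Subject"] = "snf-burnin failure"
      msg["From"] = "burnin@example.org"
      msg["To"] = "admins@example.org"
      server = smtplib.SMTP("localhost")
      server.sendmail(msg["From"], [msg["To"]], msg.as_string())
      server.quit()

  # Run one burnin instance per user in parallel, then report failures.
  procs = [launch_burnin(token, image) for token, image in USERS]
  for proc in procs:
      output, _ = proc.communicate()
      if proc.returncode != 0:
          mail_admins(output.decode("utf-8", "replace"))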
