.. _snf-burnin:

snf-burnin
^^^^^^^^^^

:ref:`snf-burnin <snf-burnin>` is an integration testing tool for a running
Synnefo deployment. Using the Synnefo REST APIs, it simulates a real user and
tries to identify bugs or performance issues by running a series of tests.
The tests are divided into the following categories:

* :ref:`Authentication Tests <unauthorizedtestcase>`
* :ref:`Image Tests <imagestestcase>`
* :ref:`Flavor Tests <flavorstestcase>`
* :ref:`Server Tests <serverstestcase>`
* :ref:`Network Tests <networktestcase>`
* :ref:`Storage Tests <pithostestcase>`


Usage
=====

:ref:`snf-burnin <snf-burnin>` is a command line tool written in Python. It
supports a number of command line options through which the user can change
the behaviour of the tests.

A typical usage of snf-burnin is:

::

    snf-burnin --token=USERS_SECRET_TOKEN \
               --auth-url="https://accounts.synnefo.org/identity/v2.0" \
               --system-images-user=SYSTEM_IMAGES_USER_ID \
               --image-id=IMAGE_ID \
               --log-folder=LOG_FOLDER

The above options are the minimal (mandatory) set one has to specify in order
for snf-burnin to function properly. The first two are the credentials needed
to access Synnefo's REST API and can be found in the user's dashboard. The
third is needed by some :ref:`Image Tests <imagestestcase>` as we will see
later. The fourth tells snf-burnin which image to use for creating our test
servers, and the last one specifies the log folder where any results should be
saved.

For more information about snf-burnin and its command line options, run
snf-burnin with --help:

::

    Usage: snf-burnin [options]

    snf-burnin runs a number of test scenarios on a Synnefo deployment.

    Options:
      -h, --help            show this help message and exit
      --auth-url=AUTH_URL   The AUTH URI to use to reach the Synnefo API
      --system-images-user=SYSTEM_IMAGES_USER
                            Owner of system images
      --token=TOKEN         The token to use for authentication to the API
      --nofailfast          Do not fail immediately if one of the tests fails
                            (EXPERIMENTAL)
      --no-ipv6             Disables ipv6 related tests
      --action-timeout=TIMEOUT
                            Wait SECONDS seconds for a server action to complete,
                            then the test is considered failed
      --build-warning=TIMEOUT
                            Warn if TIMEOUT seconds have passed and a build
                            operation is still pending
      --build-fail=BUILD_TIMEOUT
                            Fail the test if TIMEOUT seconds have passed and a
                            build operation is still incomplete
      --query-interval=INTERVAL
                            Query server status when requests are pending every
                            INTERVAL seconds
      --fanout=COUNT        Spawn up to COUNT child processes to execute in
                            parallel, essentially have up to COUNT server build
                            requests outstanding (EXPERIMENTAL)
      --force-flavor=FLAVOR ID
                            Force all server creations to use the specified FLAVOR
                            ID instead of a randomly chosen one, useful if disk
                            space is scarce
      --image-id=IMAGE ID   Test the specified image id, use 'all' to test all
                            available images (mandatory argument)
      --show-stale          Show stale servers from previous runs, whose name
                            starts with `snf-test-'
      --delete-stale        Delete stale servers from previous runs, whose name
                            starts with `snf-test-'
      --force-personality=PERSONALITY_PATH
                            Force a personality file injection.
                            File path required.
      --log-folder=LOG_FOLDER
                            Define the absolute path where the output
                            log is stored.
      -V, --verbose         Print detailed output about multiple processes
                            spawning
      --set-tests=TESTS     Set comma separated tests for this run.
                            Available tests: auth, images, flavors,
                            servers, server_spawn,
                            network_spawn, pithos.
                            Default = all


Log files
=========

In each run, snf-burnin stores log files under the folder defined by the
--log-folder parameter. For every run, it creates a new subfolder, using a
timestamp and the image-id as a unique name. The name prefixes of the log
files are:

* details: Contains the complete log of the snf-burnin run.
* error: Lists the testcases that encountered a runtime error.
* failed: Lists the testcases that encountered a failure.
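
The per-run subfolder naming described above can be sketched with a few lines
of Python. This is only an illustration of the "timestamp plus image-id" idea;
the exact format string snf-burnin uses may differ:

```python
import os
import time


def make_log_subfolder(log_folder, image_id):
    """Build a per-run log subfolder path from a timestamp and the image id.

    The combination of the two makes each run's folder name unique, so
    successive runs never overwrite each other's logs.
    """
    stamp = time.strftime("%Y%m%d%H%M%S", time.localtime())
    return os.path.join(log_folder, "%s-%s" % (stamp, image_id))
```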


Detailed description of testcases
=================================

Here we have a complete list of all the tests snf-burnin performs, each listed
under the category to which it belongs. The user can choose to run some or all
of the categories listed below using the "--set-tests" command line flag.
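
A "--set-tests" value is a comma separated list of test names, validated
against the names shown in the help output above. A minimal sketch of such a
parser (not the tool's actual implementation) could look like:

```python
# Test names as listed in the snf-burnin help output.
AVAILABLE_TESTS = {"auth", "images", "flavors", "servers",
                   "server_spawn", "network_spawn", "pithos"}


def parse_set_tests(value):
    """Split a comma separated --set-tests value and validate each name.

    "all" (the default) selects every available test category.
    """
    if value == "all":
        return sorted(AVAILABLE_TESTS)
    chosen = [name.strip() for name in value.split(",") if name.strip()]
    unknown = [name for name in chosen if name not in AVAILABLE_TESTS]
    if unknown:
        raise ValueError("unknown tests: %s" % ", ".join(unknown))
    return chosen
```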


.. _unauthorizedtestcase:

UnauthorizedTestCase
--------------------
* Use a random token and try to authenticate to the Astakos service. The
  expected response is "401 Unauthorized".
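
The check can be sketched as follows. The ``AUTH_URL`` below is a placeholder,
not a real deployment, and the ``X-Auth-Token`` header is an assumption about
how the token is passed; only the expected-status logic is exercised here:

```python
import binascii
import os
import urllib.error
import urllib.request

# Placeholder endpoint; substitute your deployment's real auth URL.
AUTH_URL = "https://accounts.example.org/identity/v2.0"


def random_token(nbytes=16):
    """Generate a random hex token that is almost surely invalid."""
    return binascii.hexlify(os.urandom(nbytes)).decode("ascii")


def is_unauthorized(status):
    """The testcase passes only if the API rejects the token with 401."""
    return status == 401


def check_unauthorized(auth_url=AUTH_URL):
    """Send a request carrying a bogus token and expect 401 Unauthorized."""
    req = urllib.request.Request(auth_url,
                                 headers={"X-Auth-Token": random_token()})
    try:
        status = urllib.request.urlopen(req).status
    except urllib.error.HTTPError as err:
        status = err.code
    return is_unauthorized(status)
```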

.. _imagestestcase:

ImagesTestCase
--------------
* Request from Cyclades the list of all registered images and check that its
  length is greater than 0 (i.e. test that there are registered images for the
  users to use).
* Request from Cyclades the list of all registered images with details and
  check that its length is greater than 0.
* Test that the two lists retrieved earlier contain exactly the same images.
* Using the SYSTEM_IMAGES_USER_ID, choose only the images that belong to the
  system user and check that their names are unique. This test cannot be
  applied to all images, as users can name their images whatever they want.
* Again, for the images that belong to the system user, check that the
  "osfamily" and "root_partition" metadata values have been defined. These
  metadata values are mandatory for an image to be used.
* Download from Pithos+ the image specified with the "--image-id" parameter
  and save it locally.
* Create a new container in Pithos+ named "images".
* Upload the downloaded image to Pithos+ under the "images" container.
* Use the Plankton service to register the above image. Set the "osfamily" and
  "root_partition" metadata values, which are mandatory.
* Request from Cyclades the list of all registered images and check that our
  newly registered image is among them.
* Delete the image from Pithos+ along with the local copy on our disk.
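
The uniqueness and mandatory-metadata checks from the list above can be
sketched over plain dictionaries. The ``owner``, ``name`` and ``metadata``
keys are assumptions standing in for the real API response fields:

```python
def system_images(images, system_user_id):
    """Keep only the images owned by the system user."""
    return [img for img in images if img.get("owner") == system_user_id]


def names_are_unique(images):
    """True when no two images in the list share a name."""
    names = [img["name"] for img in images]
    return len(names) == len(set(names))


def has_mandatory_metadata(image):
    """osfamily and root_partition must be defined for an image to be usable."""
    meta = image.get("metadata", {})
    return "osfamily" in meta and "root_partition" in meta
```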

.. _flavorstestcase:

FlavorsTestCase
---------------
* Request from Cyclades the list of all flavors and check that its length is
  greater than 0 (i.e. test that there are flavors for the users to use).
* Request from Cyclades the list of all flavors with details and check that
  its length is greater than 0.
* Test that the two lists retrieved earlier contain exactly the same flavors.
* Test that all flavors have unique names.
* Test that all flavors have a name of the form CxxRyyDzz, where xx is the
  vCPU count, yy is the RAM in MiB, and zz is the disk in GiB.
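
The CxxRyyDzz naming rule lends itself to a small regular expression. This
sketch is illustrative, not snf-burnin's actual implementation:

```python
import re

# CxxRyyDzz: vCPU count, RAM in MiB, disk in GiB.
FLAVOR_NAME_RE = re.compile(r"^C(\d+)R(\d+)D(\d+)$")


def parse_flavor_name(name):
    """Return (cpu, ram_mib, disk_gib) or None if the name is malformed."""
    match = FLAVOR_NAME_RE.match(name)
    if match is None:
        return None
    return tuple(int(group) for group in match.groups())
```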

.. _serverstestcase:

ServersTestCase
---------------
* Request from Cyclades the list of all servers with and without details and
  check that the two lists have the same length.
* Test that the simple and detailed server lists contain the same names.
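
The length and name comparison above can be sketched as a pure function; the
``name`` key is an assumption standing in for the real listing field:

```python
def same_servers(simple_list, detailed_list):
    """Check that the simple and detailed listings describe the same servers.

    Order is ignored: a detailed listing may return servers in a different
    order than the simple one, so names are compared after sorting.
    """
    if len(simple_list) != len(detailed_list):
        return False
    return (sorted(srv["name"] for srv in simple_list) ==
            sorted(srv["name"] for srv in detailed_list))
```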

SpawnServerTestCase
-------------------
* Submit a create server request to the Cyclades service. Use the IMAGE_ID
  specified on the command line. If FLAVOR_ID was specified as well, use that
  one, else choose one randomly. The name of the new server will start with
  "snf-test-" followed by a timestamp, so we can know which servers have been
  created by snf-burnin and when. Also check that the response from the
  Cyclades service contains the correct server_name, server_flavor_id,
  server_image_id and that the status of the server is currently "BUILD".
  Finally, from the above response, extract the server's id and password.
* Request from Cyclades the list of all servers with details and check that
  our newly created server has the correct server_name, server_flavor_id,
  server_image_id and that its status is "BUILD".
* Request from Cyclades the details of the image we used to build our server.
  Extract the "os" and "users" metadata values. Using the first one, update
  the server's metadata and set its "os" metadata value to be the same as the
  one from the image's metadata. Using the second one, determine the username
  to use for future connections to this host.
* Retrieve the server's metadata from Cyclades and verify that the server's
  "os" metadata key is set based on the image's metadata.
* Wait until the server changes state to ACTIVE. This is done by querying the
  service for the server's state every QUERY_INTERVAL seconds until
  BUILD_TIMEOUT has been reached. Both QUERY_INTERVAL and BUILD_TIMEOUT values
  can be changed from the command line.
* Request from the Cyclades service a VNC console to our server. In order to
  verify that the returned connection is indeed a VNC one, snf-burnin
  implements the first basic steps of the RFB protocol:

  * Step 1. Send the ProtocolVersion message (par. 6.1.1)
  * Step 2. Check that only VNC Authentication is supported (par. 6.1.2)
  * Step 3. Request VNC Authentication (par. 6.1.2)
  * Step 4. Receive Challenge (par. 6.2.2)
  * Step 5. DES-Encrypt the challenge, using the password as key (par. 6.2.2)
  * Step 6. Check that the SecurityResult is correct (par. 6.1.3)

* Request from Cyclades the server's details and check that our server has
  been assigned an IPv4 address.
* Check that our server has been assigned an IPv6 address. This test can be
  skipped if for some reason the targeted Synnefo deployment doesn't support
  IPv6.
* Test that our server responds to ping requests on its IPv4 address.
* Test that our server responds to ping requests on its IPv6 address. This
  test can also be skipped.
* Submit a shutdown request for our server.
* Wait and verify that the status of our server became "STOPPED".
* Submit a start request for our server.
* Wait and verify that the status of our server became "ACTIVE" again.
* Test if the server responds to ping on its IPv4 address (verify it is up
  and running).
* Test if the server responds to ping on its IPv6 address (verify it is up
  and running).
* If the server is a Linux machine, SSH to it using its IPv4 address and
  verify that it has a valid hostname.
* If the server is a Linux machine, SSH to it using its IPv6 address and
  verify that it has a valid hostname.
* If the server is a Windows machine, try to connect to its RDP port using
  both its IPv4 and IPv6 addresses.
* If during the creation of the server the user chose a personality file to
  be used, check that this file is present on the server and that its contents
  are correct.
* Submit a server delete request.
* Wait and verify that the status of our server became "DELETED".
* Request from Cyclades the list of all servers and verify that our newly
  deleted server is not in the list.
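
The "wait until the server changes state" steps above boil down to a polling
loop driven by QUERY_INTERVAL and a timeout. A minimal sketch, where
``get_status`` stands in for the real Cyclades status query (hypothetical
here), could look like:

```python
import time


def wait_for_status(get_status, expected, query_interval=5, timeout=600,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() every query_interval seconds until it returns
    `expected` or `timeout` seconds have elapsed.

    Returns True if the expected status was reached in time.  The clock and
    sleep callables are injectable so the loop can be tested without waiting.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if get_status() == expected:
            return True
        sleep(query_interval)
    # One final check so a status change right at the deadline still counts.
    return get_status() == expected
```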

.. _networktestcase:

NetworkTestCase
---------------
* Submit a create request for server A.
* Wait and verify that the status of server A became "ACTIVE".
* Submit a create request for server B.
* Wait and verify that the status of server B became "ACTIVE".
* Submit a create private network request. Wait and verify that the status of
  the network became "ACTIVE".
* Connect the two servers (A and B) to the newly created network. Wait and
  verify that both machines got an extra nic, and hence have been connected
  to the network.
* Reboot server A.
* Test if server A responds to ping on its IPv4 address (verify it is up and
  running).
* Reboot server B.
* Test if server B responds to ping on its IPv4 address (verify it is up and
  running).
* Connect via SSH and set up the new network interface on server A.
* Connect via SSH and set up the new network interface on server B.
* Connect via SSH to server A and test if server B responds to ping via their
  new interface.
* Disconnect both servers from the network. Check the network details and
  verify that both servers have been successfully disconnected.
* Send a delete network request. Verify that the network has actually been
  deleted.
* Send a request to delete the servers and wait until they are actually
  deleted.

.. _pithostestcase:

PithosTestCase
--------------
* Request from Pithos+ the list of containers and check that its length is
  greater than 0 (i.e. test that there are containers).
* Test that the containers have unique names.
* Create a new container. Choose a random name for our container and then
  check that it has been successfully created.
* Upload a file to Pithos+ under our newly created container.
* Download the file from Pithos+ and test that it is the same as the one
  uploaded.
* Remove the created file and container from Pithos+ and verify that they
  have been successfully deleted.
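
The upload/download comparison above is an integrity check. One common way to
implement it, hashing both files in chunks so large objects never need to fit
in memory, can be sketched as follows (illustrative, not the tool's actual
code):

```python
import hashlib


def file_digest(path, chunk_size=65536):
    """Hash a file incrementally, chunk by chunk."""
    digest = hashlib.md5()
    with open(path, "rb") as fileobj:
        for chunk in iter(lambda: fileobj.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def same_content(uploaded_path, downloaded_path):
    """True when the downloaded copy is byte-identical to the upload."""
    return file_digest(uploaded_path) == file_digest(downloaded_path)
```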


Burnin as an alert tool
=======================

Burnin can be used to verify that a Synnefo deployment is working as expected
and to notify the administrators in case of an error. For this purpose there
is a script under the /snf-tools/conf directory, named **snf-burnin-run.sh**,
which is intended to be run from cron to periodically execute burnin. It runs
many instances of burnin simultaneously, for a number of different users, and
reports errors through email.