.. _api-guide:

Synnefo REST API Guide
^^^^^^^^^^^^^^^^^^^^^^

This is Synnefo's REST API Guide.

Here we document all Synnefo REST APIs, so that external developers can write
independent tools that interact with Synnefo.

Synnefo exposes the OpenStack APIs for all of its operations. Extensions have
been written for advanced operations wherever needed, and minor changes have
been made for things that were missing or change frequently.

Most Synnefo services have a corresponding OpenStack API:

| Cyclades/Compute Service -> OpenStack Compute API
| Cyclades/Network Service -> OpenStack Neutron API
| Cyclades/Image Service -> OpenStack Glance API
| Pithos/Storage Service -> OpenStack Object Store API
| Astakos/Identity Service -> OpenStack Keystone API
| Astakos/Quota Service -> Proprietary API
| Astakos/Resource Service -> Proprietary API

Below, we describe all Synnefo APIs in conjunction with the corresponding
OpenStack APIs.

Identity Service API (Astakos)
==============================

The Identity Management Service of Synnefo, which is part of Astakos, exposes
the OpenStack Keystone API.

The current Astakos/Identity API is:

.. toctree::
   :maxdepth: 2

   Identity API (Keystone) <identity-api-guide>

Resource and Quota Service API (Astakos)
========================================

The Resource and Quota Services are implemented inside Astakos and have the
following Synnefo-specific (proprietary) API:

.. toctree::
   :maxdepth: 2

   Resource and Quota API <quota-api-guide>

Project Service API
===================

The Project Service is implemented inside Astakos and has the following
Synnefo-specific (proprietary) API:

.. toctree::
   :maxdepth: 2

   Project API <project-api-guide>

Compute Service API (Cyclades)
==============================

The Compute part of Cyclades exposes the OpenStack Compute API with minor
changes wherever needed.

This is the Cyclades/Compute API:

.. toctree::
   :maxdepth: 2

   Compute API (Compute) <compute-api-guide>

Network Service API (Cyclades)
==============================

The Network Service is implemented inside Cyclades. It exposes the OpenStack
Neutron API.

This is the Cyclades/Network API:

.. toctree::
   :maxdepth: 2

   Network API (Neutron) <network-api-guide>

Image Service API (Cyclades)
============================

The Image Service is implemented inside Cyclades. It exposes the OpenStack
Glance API with minor changes wherever needed.

This is the Cyclades/Image API:

.. toctree::
   :maxdepth: 2

   Image API (Glance) <image-api-guide>

Storage Service API (Pithos)
============================

Pithos is the Storage Service of Synnefo and it exposes the OpenStack Object
Storage API with extensions for advanced operations, e.g., syncing.

This is the Pithos Object Storage API:

.. toctree::
   :maxdepth: 2

   Storage API (Object Storage) <object-api-guide>

Implementing new clients
========================

In this section we discuss implementation guidelines that developers should
take into account before writing their own clients for the above APIs. Before
starting your client implementation, make sure you have thoroughly read the
corresponding Synnefo API.

Pithos clients
--------------

User Experience
~~~~~~~~~~~~~~~

Hopefully this API will allow for a multitude of client implementations, each
supporting a different device or operating system. All clients will be able to
manipulate containers and objects - even software designed only for OOS API
compatibility. But a Pithos interface should not only be about showing
containers and folders. There are some extra user interface elements and
functionalities that should be common to all implementations.

Upon entering the service, a user is presented with the following elements -
which can be represented as folders or with other related icons:

 * The ``home`` element, which is used as the default entry point to the user's
   "files". Objects under ``home`` are represented in the usual hierarchical
   organization of folders and files.
 * The ``trash`` element, which contains files that have been marked for
   deletion, but can still be recovered.
 * The ``shared`` element, which contains all objects shared by the user with
   other users of the system.
 * The ``others`` element, which contains all objects that other users share
   with the user.
 * The ``groups`` element, which contains the names of groups the user has
   defined. Each group consists of a user list. Group creation, deletion, and
   manipulation are carried out by actions originating here.
 * The ``history`` element, which allows browsing past instances of ``home``
   and - optionally - ``trash``.

Objects in Pithos can be:

 * Moved to trash and then deleted.
 * Shared with specific permissions.
 * Made public (shared with non-Pithos users).
 * Restored from previous versions.

Some of these functions are performed by the client software and some by the
Pithos server.

In the first version of Pithos, objects could also be assigned custom tags.
This is no longer supported. Existing deployments can migrate tags into a
specific metadata value, i.e. ``X-Object-Meta-Tags``.

Implementation Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~

Pithos clients should use the ``pithos`` and ``trash`` containers for active
and inactive objects respectively. If either of these containers is not found,
the client software should create it, without interrupting the user's
workflow. The ``home`` element corresponds to ``pithos`` and the ``trash``
element to ``trash``. Use ``PUT`` with the ``X-Move-From`` header, or ``MOVE``,
to transfer objects from one container to the other. Use ``DELETE`` to remove
objects from ``pithos`` without trashing them, or to remove them from
``trash``. When moving objects, detect naming conflicts with the ``If-Match``
or ``If-None-Match`` headers. Such conflicts should be resolved by the user.
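
The move-to-trash operation described above - ``PUT`` with ``X-Move-From`` -
can be sketched as follows. This is a minimal sketch only: the base URL,
account name, and token are illustrative placeholders, not values defined by
the API.

.. code-block:: python

  def build_trash_request(account, obj, token,
                          base_url="https://example.synnefo.org/object-store/v1"):
      """Build (as a dict) the PUT request that moves `obj` from the
      ``pithos`` container to the ``trash`` container."""
      return {
          "method": "PUT",
          # The destination object, named under the trash container
          "url": "%s/%s/trash/%s" % (base_url, account, obj),
          "headers": {
              "X-Auth-Token": token,  # assumed authentication header
              # The source object, under the active container
              "X-Move-From": "/pithos/%s" % obj,
              "Content-Length": "0",
          },
      }

  req = build_trash_request("user@example.org", "docs/report.txt", "token123")

Sending the request itself is omitted; the point is the layout - the
destination is named in the URL and the source in ``X-Move-From``.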

Object names should use the ``/`` delimiter to impose a hierarchy of folders
and files.

The ``shared`` element should be implemented as a read-only view of the
``pithos`` container, using the ``shared`` parameter when listing objects. The
``others`` element should start with a top-level ``GET`` to retrieve the list
of accounts accessible to the user. It is suggested that the client software
hide the next step of navigation - the container - if it only includes
``pithos``, and forward the user directly to the objects.

Public objects are not included in ``shared`` and ``others`` listings. It is
suggested that they are marked in a visually distinctive way in ``pithos``
listings (for example using an icon overlay).

A special application menu, or a section in application preferences, should be
devoted to managing groups (the ``groups`` element). All group-related actions
are implemented at the account level.

Browsing past versions of objects should be available both at the object and
the container level. At the object level, a list of past versions can be
included in the screen showing details or more information on the object
(metadata, permissions, etc.). At the container level, it is suggested that
clients use a ``history`` element, which presents to the user a read-only,
time-variable view of ``pithos`` contents. This can be accomplished via the
``until`` parameter in listings. Optionally, ``history`` may include ``trash``.
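
Building such a time-variable listing request can be sketched as below. The
base URL is an illustrative placeholder, and ``until`` is assumed to be a Unix
timestamp; check the listing documentation for the exact parameter semantics.

.. code-block:: python

  from urllib.parse import urlencode

  def build_history_listing_url(account, until_timestamp,
                                base_url="https://example.synnefo.org/object-store/v1"):
      """Return the GET URL listing ``pithos`` contents as they were at
      `until_timestamp` (a Unix timestamp)."""
      query = urlencode({"format": "json", "until": int(until_timestamp)})
      return "%s/%s/pithos?%s" % (base_url, account, query)

  url = build_history_listing_url("user@example.org", 1388534400)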

Uploading and downloading data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By using hashmaps to upload and download objects, the corresponding operations
can complete much faster.

In the case of an upload, only the missing blocks will be submitted to the
server:

 * Calculate the hash value for each block of the object to be uploaded. Use
   the hash algorithm and block size of the destination container.
 * Send a hashmap ``PUT`` request for the object.

   * Server responds with status ``201`` (Created):

     * Blocks are already on the server. The object has been created. Done.

   * Server responds with status ``409`` (Conflict):

     * Server's response body contains the hashes of the blocks that do not
       exist on the server.
     * For each hash value in the server's response (or all hashes together):

       * Send a ``POST`` request to the destination container with the
         corresponding data.

 * Repeat hashmap ``PUT``. Fail if the server's response is not ``201``.

    
241
Consulting hashmaps when downloading allows for resuming partially transferred
242
objects. The client should retrieve the hashmap from the server and compare it
243
with the hashmap computed from the respective local file. Any missing parts can
244
be downloaded with ``GET`` requests with the additional ``Range`` header.
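
The comparison step can be sketched as a pure function: given the server's
hashmap and the one computed from the local file, it yields the byte ranges
that still need to be fetched. (The final block of an object may be shorter
than the block size; the server simply returns the bytes that exist.)

.. code-block:: python

  def missing_ranges(server_hashes, local_hashes, blocksize):
      """Return (offset, length) pairs for blocks that differ from, or are
      absent in, the local file; each maps to ``Range: bytes=o-(o+l-1)``."""
      ranges = []
      for i, server_hash in enumerate(server_hashes):
          local_hash = local_hashes[i] if i < len(local_hashes) else None
          if local_hash != server_hash:
              ranges.append((i * blocksize, blocksize))
      return ranges

  # Block 1 differs locally and block 2 is absent:
  gaps = missing_ranges(["h0", "h1", "h2"], ["h0", "x1"], blocksize=4)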

Syncing
~~~~~~~

Consider the following algorithm for synchronizing a local folder with the
server. The "state" is the complete object listing, with the corresponding
attributes.

.. code-block:: python

  # L: Local State, the last synced state of the object.
  # Stored locally (e.g. in an SQLite database)

  # C: Current State, the current local state of the object
  # Returned by the filesystem

  # S: Server State, the current server state of the object
  # Returned by the server (HTTP request)

  def sync(path):
      L = get_local_state(path)   # Database action
      C = get_current_state(path) # Filesystem action
      S = get_server_state(path)  # Network action

      if C == L:
          # No local changes
          if S == L:
              # No remote changes, nothing to do
              return
          else:
              # Update local state to match that of the server
              download(path)
              update_local_state(path, S)
      else:
          # Local changes exist
          if S == L:
              # No remote changes, update the server and the local state
              upload(path)
              update_local_state(path, C)
          else:
              # Both local and server changes exist
              if C == S:
                  # We were lucky, both did the same
                  update_local_state(path, C)
              else:
                  # Conflicting changes exist
                  conflict()

Notes:

 * States represent file hashes (it is suggested to use Merkle hashes). Deleted
   or non-existing files are assumed to have a magic hash (e.g. empty string).
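
A Merkle-style state hash, as suggested in the note above, can be sketched
like this: the file's hash is the hash of its concatenated block hashes, and
an absent file gets the magic hash of empty content. The block size and
algorithm here are illustrative assumptions, not prescribed values.

.. code-block:: python

  import hashlib
  import os

  def state_hash(path, blocksize=4 * 1024 * 1024):
      """Merkle-style hash of the file at `path`: hash the concatenation of
      its block hashes. Absent files get the magic "empty" hash."""
      if not os.path.exists(path):
          return hashlib.sha256(b"").hexdigest()  # magic hash for absent files
      block_hashes = []
      with open(path, "rb") as f:
          while True:
              block = f.read(blocksize)
              if not block:
                  break
              block_hashes.append(hashlib.sha256(block).digest())
      return hashlib.sha256(b"".join(block_hashes)).hexdigest()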