.. _api-guide:

Synnefo REST API Guide
^^^^^^^^^^^^^^^^^^^^^^

This is Synnefo's REST API Guide.

Here we document all Synnefo REST APIs, so that external developers can write
independent tools that interact with Synnefo.

Synnefo exposes the OpenStack APIs for all of its operations. Extensions have
been written for advanced operations wherever needed, and minor changes have
been made for things that were missing or change frequently.

Most Synnefo services have a corresponding OpenStack API:

| Cyclades/Compute Service -> OpenStack Compute API
| Cyclades/Network Service -> OpenStack Networking ("Neutron") API
| Cyclades/Image Service -> OpenStack Image ("Glance") API
| Pithos/Storage Service -> OpenStack Object Storage API
| Astakos/Identity Service -> OpenStack Identity ("Keystone") API
| Astakos/Quota Service -> Proprietary API
| Astakos/Resource Service -> Proprietary API
| Astakos/Project Service -> Proprietary API

Below, we describe all Synnefo APIs in conjunction with the corresponding
OpenStack APIs.


Identity Service API (Astakos)
==============================

The Identity Management Service of Synnefo, which is part of Astakos, exposes
the OpenStack Identity ("Keystone") API.

The current Astakos/Identity API is:

.. toctree::
   :maxdepth: 2

   Identity API (Keystone) <identity-api-guide>


Resource and Quota Service API (Astakos)
========================================

The Resource and Quota Services are implemented inside Astakos and have the
following Synnefo-specific (proprietary) API:

.. toctree::
   :maxdepth: 2

   Resource and Quota API <quota-api-guide>

Project Service API
===================

The Project Service is implemented inside Astakos and has the following
Synnefo-specific (proprietary) API:

.. toctree::
   :maxdepth: 2

   Project API <project-api-guide>

Compute Service API (Cyclades)
==============================

The Compute part of Cyclades exposes the OpenStack Compute API with minor
changes wherever needed.

This is the Cyclades/Compute API:

.. toctree::
   :maxdepth: 2

   Compute API (Compute) <compute-api-guide>


Network Service API (Cyclades)
==============================

The Network Service is implemented inside Cyclades. It exposes the OpenStack
Networking ("Neutron") API.

This is the Cyclades/Network API:

.. toctree::
   :maxdepth: 2

   Network API (Neutron) <network-api-guide>

Image Service API (Cyclades)
============================

The Image Service is implemented inside Cyclades. It exposes the OpenStack
Image ("Glance") API with minor changes wherever needed.

This is the Cyclades/Image API:

.. toctree::
   :maxdepth: 2

   Image API (Glance) <image-api-guide>

Storage Service API (Pithos)
============================

Pithos is the Storage Service of Synnefo and it exposes the OpenStack Object
Storage API with extensions for advanced operations, e.g., syncing.

This is the Pithos Object Storage API:

.. toctree::
   :maxdepth: 2

   Storage API (Object Storage) <object-api-guide>


Implementing new clients
========================

In this section we discuss implementation guidelines that a developer should
take into account before writing their own client for the above APIs. Before
starting your client implementation, make sure you have thoroughly read the
corresponding Synnefo API.

Pithos clients
--------------

User Experience
~~~~~~~~~~~~~~~

Hopefully this API will allow for a multitude of client implementations, each
supporting a different device or operating system. All clients will be able to
manipulate containers and objects - even software designed only for OpenStack
Object Storage API compatibility. But a Pithos interface should not be only
about showing containers and folders. There are some extra user interface
elements and functionalities that should be common to all implementations.

Upon entering the service, a user is presented with the following elements -
which can be represented as folders or with other related icons:

 * The ``home`` element, which is used as the default entry point to the user's
   "files". Objects under ``home`` are represented in the usual hierarchical
   organization of folders and files.
 * The ``trash`` element, which contains files that have been marked for
   deletion, but can still be recovered.
 * The ``shared by me`` element, which contains all objects shared by the
   user to other users of the system.
 * The ``shared with me`` element, which contains all objects that other users
   share with the user.
 * The ``groups`` element, which contains the names of groups the user has
   defined. Each group consists of a user list. Group creation, deletion, and
   manipulation is carried out by actions originating here.
.. * The ``history`` element, which allows browsing past instances of ``home``
..   and - optionally - ``trash``.

Objects in Pithos can be:

 * Moved to trash and then deleted.
 * Shared with specific permissions.
 * Made public (shared with non-Pithos users).
 * Restored from previous versions.

Some of these functions are performed by the client software and some by the
Pithos server.

In the first version of Pithos, objects could also be assigned custom tags.
This is no longer supported. Existing deployments can migrate tags into a
specific metadata value, i.e. ``X-Object-Meta-Tags``.

Implementation Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~

Pithos clients should use the ``pithos`` and ``trash`` containers for active
and inactive objects respectively. If any of these containers is not found, the
client software should create it, without interrupting the user's workflow. The
``home`` element corresponds to ``pithos`` and the ``trash`` element to
``trash``. Use ``PUT`` with the ``X-Move-From`` header, or ``MOVE``, to
transfer objects from one container to the other. Use ``DELETE`` to remove from
``pithos`` without trashing, or to remove from ``trash``. When moving objects,
detect naming conflicts with the ``If-Match`` or ``If-None-Match`` headers.
Such conflicts should be resolved by the user.
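
The move operations above can be sketched as plain request construction. This
is a minimal sketch: the helper names are hypothetical, and the paths assume
the standard ``/<account>/<container>/<object>`` layout described in the
Object Storage API guide.

```python
def trash_request(account, name):
    """Build a PUT that moves an object from ``pithos`` to ``trash``.

    Hypothetical helper: it only constructs (method, path, headers);
    sending the request is left to the client's HTTP layer.
    """
    return ("PUT",
            "/%s/trash/%s" % (account, name),
            {"X-Move-From": "/%s/pithos/%s" % (account, name),
             "Content-Length": "0"})  # no body; the move is server-side


def restore_request(account, name):
    """The inverse move, from ``trash`` back to ``pithos``."""
    return ("PUT",
            "/%s/pithos/%s" % (account, name),
            {"X-Move-From": "/%s/trash/%s" % (account, name),
             "Content-Length": "0"})
```

A conflict check could, for example, add ``If-None-Match: *`` to the headers,
so the request fails if the target name already exists.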

Object names should use the ``/`` delimiter to impose a hierarchy of folders
and files.

The ``shared`` element should be implemented as a read-only view of the
``pithos`` container, using the ``shared`` parameter when listing objects. The
``others`` element should start with a top-level ``GET`` to retrieve the list
of accounts accessible to the user. It is suggested that the client software
hides the next step of navigation - the container - if it only includes
``pithos``, and forwards the user directly to the objects.
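
As a sketch, the two views reduce to building listing URLs. The exact query
parameters are documented in the Object Storage API guide; the helper names
and query layout here are illustrative only:

```python
def shared_listing_path(account):
    """Listing of ``pithos`` restricted to objects the user has shared,
    i.e. the read-only ``shared`` view (illustrative query layout)."""
    return "/%s/pithos?format=json&shared=" % account


def others_listing_path():
    """Top-level GET returning the accounts accessible to the user;
    navigation then proceeds per account (illustrative)."""
    return "/?format=json"
```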

Public objects are not included in ``shared`` and ``others`` listings. It is
suggested that they are marked in a visually distinctive way in ``pithos``
listings (for example using an icon overlay).

A special application menu, or a section in application preferences, should be
devoted to managing groups (the ``groups`` element). All group-related actions
are implemented at the account level.

Browsing past versions of objects should be available both at the object and
the container level. At the object level, a list of past versions can be
included in the screen showing details or more information on the object
(metadata, permissions, etc.). At the container level, it is suggested that
clients use a ``history`` element, which presents to the user a read-only,
time-variable view of ``pithos`` contents. This can be accomplished via the
``until`` parameter in listings. Optionally, ``history`` may include ``trash``.
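
A ``history`` view then amounts to ordinary listings with ``until`` set to the
requested point in time. A minimal sketch, assuming Unix timestamps and an
illustrative query layout (the helper itself is hypothetical):

```python
def history_listing_paths(account, when, include_trash=False):
    """Listing paths for a read-only view of the containers' contents
    as they were at Unix timestamp ``when``."""
    containers = ["pithos"] + (["trash"] if include_trash else [])
    return ["/%s/%s?format=json&until=%d" % (account, c, when)
            for c in containers]
```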

Uploading and downloading data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By using hashmaps to upload and download objects, the corresponding operations
can complete much faster.

In the case of an upload, only the missing blocks will be submitted to the
server:

 * Calculate the hash value for each block of the object to be uploaded. Use
   the hash algorithm and block size of the destination container.
 * Send a hashmap ``PUT`` request for the object.

   * Server responds with status ``201`` (Created):

     * Blocks are already on the server. The object has been created. Done.

   * Server responds with status ``409`` (Conflict):

     * Server's response body contains the hashes of the blocks that do not
       exist on the server.
     * For each hash value in the server's response (or all hashes together):

       * Send a ``POST`` request to the destination container with the
         corresponding data.

 * Repeat hashmap ``PUT``. Fail if the server's response is not ``201``.
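
The steps above can be sketched as follows. ``put_hashmap`` and ``post_block``
stand in for the actual HTTP requests (hashmap ``PUT`` and block ``POST``),
and the block size and hash algorithm must match the destination container's
policy; ``sha256`` below is only an example.

```python
import hashlib


def block_hashes(data, block_size, algorithm="sha256"):
    """Hash each fixed-size block of the object, using the destination
    container's block size and hash algorithm."""
    return [hashlib.new(algorithm, data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]


def hashmap_upload(data, block_size, put_hashmap, post_block):
    """Upload only the blocks the server is missing.

    put_hashmap(hashes) -> (status, missing_hashes); post_block(bytes)
    sends one block to the destination container. Both are stand-ins
    for the real HTTP requests.
    """
    hashes = block_hashes(data, block_size)
    status, missing = put_hashmap(hashes)        # hashmap PUT
    if status == 201:
        return                                   # object created, done
    if status != 409:
        raise RuntimeError("unexpected status: %s" % status)
    index = {h: i for i, h in enumerate(hashes)}
    for h in missing:                            # send only missing blocks
        i = index[h]
        post_block(data[i * block_size:(i + 1) * block_size])
    status, _ = put_hashmap(hashes)              # repeat hashmap PUT
    if status != 201:
        raise RuntimeError("upload failed")
```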

Consulting hashmaps when downloading allows for resuming partially transferred
objects. The client should retrieve the hashmap from the server and compare it
with the hashmap computed from the respective local file. Any missing parts can
be downloaded with ``GET`` requests with the additional ``Range`` header.
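
Comparing the two hashmaps yields the byte ranges to fetch; each range then
becomes a ``GET`` with a ``Range: bytes=<start>-<end>`` header. A minimal
sketch (the helper is illustrative; the final block's end offset may exceed
the object size, which ``Range`` semantics tolerate):

```python
def missing_ranges(local_hashes, remote_hashes, block_size):
    """Inclusive byte ranges of the blocks that are absent locally or
    differ from the server's version."""
    ranges = []
    for i, remote in enumerate(remote_hashes):
        local = local_hashes[i] if i < len(local_hashes) else None
        if local != remote:
            ranges.append((i * block_size, (i + 1) * block_size - 1))
    return ranges
```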

Syncing
~~~~~~~

Consider the following algorithm for synchronizing a local folder with the
server. The "state" is the complete object listing, with the corresponding
attributes.

.. code-block:: python

  # L: Local State, the last synced state of the object.
  # Stored locally (e.g. in an SQLite database)

  # C: Current State, the current local state of the object
  # Returned by the filesystem

  # S: Server State, the current server state of the object
  # Returned by the server (HTTP request)

  def sync(path):
      L = get_local_state(path)   # Database action
      C = get_current_state(path) # Filesystem action
      S = get_server_state(path)  # Network action

      if C == L:
          # No local changes
          if S == L:
              # No remote changes, nothing to do
              return
          else:
              # Update local state to match that of the server
              download(path)
              update_local_state(path, S)
      else:
          # Local changes exist
          if S == L:
              # No remote changes, update the server and the local state
              upload(path)
              update_local_state(path, C)
          else:
              # Both local and server changes exist
              if C == S:
                  # We were lucky, both did the same
                  update_local_state(path, C)
              else:
                  # Conflicting changes exist
                  conflict()


Notes:

 * States represent file hashes (it is suggested to use a Merkle hash).
   Deleted or non-existing files are assumed to have a magic hash (e.g. the
   empty string).
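
With that convention, the branch structure of ``sync()`` above reduces to a
small decision table on plain hash values. This is an illustrative
reformulation, not part of any client API:

```python
MAGIC = ""  # hash of a deleted or non-existing file, per the note above


def decide(L, C, S):
    """Which action sync() takes, for last-synced (L), current local (C)
    and server (S) hashes."""
    if C == L:                  # no local changes
        return "noop" if S == L else "download"
    if S == L:                  # local changes only
        return "upload"
    # Both sides changed: lucky if identical, otherwise a conflict.
    return "update-local" if C == S else "conflict"
```

Note that ``decide("h", MAGIC, "h")`` yields ``"upload"``: a local deletion
counts as a local change and is propagated to the server.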