.. _dev-guide:

Synnefo Developer's Guide
^^^^^^^^^^^^^^^^^^^^^^^^^

This is the complete Synnefo Developer's Guide. Here, we document all Synnefo
REST APIs, to allow external developers to write independent tools that
interact with Synnefo.

Synnefo exposes the OpenStack APIs for all its operations. Also, extensions
have been written for advanced operations wherever needed, and minor changes
for things that were missing or change frequently.

Each Synnefo service has an analogous OpenStack API:

| Cyclades/Compute Service -> OpenStack Compute API
| Cyclades/Network Service -> OpenStack Compute/Network API (not Quantum yet)
| Cyclades/Image Service -> OpenStack Compute/Image API
| Cyclades/Plankton/Image Service -> OpenStack Glance API
| Pithos/Storage Service -> OpenStack Object Store API
| Astakos/Identity Service -> Proprietary, moving to OpenStack Keystone API

Below, we describe all Synnefo APIs in conjunction with the OpenStack APIs.


Identity Service API (Astakos)
==============================

Astakos, the Identity Management service of Synnefo, currently has a
proprietary API, but we are moving to the OpenStack Keystone API.

The current Identity Management API is:

.. toctree::
   :maxdepth: 2

   Identity API <astakos-api-guide>


Compute Service API (Cyclades)
==============================

The Compute part of Cyclades exposes the OpenStack Compute API with minor
changes wherever needed.

This is the Cyclades/Compute API:

.. toctree::
   :maxdepth: 2

   Compute API <cyclades-api-guide>


Network Service API (Cyclades)
==============================

The Network Service is implemented inside Cyclades. It exposes the part of the
OpenStack Compute API that has to do with Networks. The OpenStack Quantum API
is not implemented yet.

Please consult the :ref:`Cyclades/Network API <cyclades-api-guide>` for more
details.


Images Service API (Cyclades/Plankton)
======================================

Plankton is the Image Service of Synnefo, currently implemented inside
Cyclades. Plankton exposes the OpenStack Glance API with minor changes wherever
needed.

This is the Cyclades/Plankton Image API:

.. toctree::
   :maxdepth: 2

   Image API <plankton-api-guide>


Storage Service API (Pithos)
============================

Pithos is the Storage Service of Synnefo and it exposes the OpenStack Object
Storage API with extensions for advanced operations, e.g., syncing.

This is the Pithos Object Storage API:

.. toctree::
   :maxdepth: 2

   Object Storage API <pithos-api-guide>


Implementing new clients
========================

In this section we discuss implementation guidelines that a developer should
take into account before writing their own client for the above APIs. Before
starting your client implementation, make sure you have thoroughly read the
corresponding Synnefo API.

Pithos clients
--------------

User Experience
~~~~~~~~~~~~~~~

Hopefully this API will allow for a multitude of client implementations, each
supporting a different device or operating system. All clients will be able to
manipulate containers and objects - even software designed only for OOS API
compatibility. But a Pithos interface should not be only about showing
containers and folders. There are some extra user interface elements and
functionalities that should be common to all implementations.

Upon entrance to the service, a user is presented with the following elements -
which can be represented as folders or with other related icons:

 * The ``home`` element, which is used as the default entry point to the user's
   "files". Objects under ``home`` are represented in the usual hierarchical
   organization of folders and files.
 * The ``trash`` element, which contains files that have been marked for
   deletion, but can still be recovered.
 * The ``shared`` element, which contains all objects shared by the user to
   other users of the system.
 * The ``others`` element, which contains all objects that other users share
   with the user.
 * The ``groups`` element, which contains the names of groups the user has
   defined. Each group consists of a user list. Group creation, deletion, and
   manipulation is carried out by actions originating here.
 * The ``history`` element, which allows browsing past instances of ``home``
   and - optionally - ``trash``.

Objects in Pithos can be:

 * Moved to trash and then deleted.
 * Shared with specific permissions.
 * Made public (shared with non-Pithos users).
 * Restored from previous versions.

Some of these functions are performed by the client software and some by the
Pithos server.

In the first version of Pithos, objects could also be assigned custom tags.
This is no longer supported. Existing deployments can migrate tags into a
specific metadata value, i.e. ``X-Object-Meta-Tags``.

Implementation Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~

Pithos clients should use the ``pithos`` and ``trash`` containers for active
and inactive objects respectively. If any of these containers is not found, the
client software should create it, without interrupting the user's workflow. The
``home`` element corresponds to ``pithos`` and the ``trash`` element to
``trash``. Use ``PUT`` with the ``X-Move-From`` header, or ``MOVE``, to
transfer objects from one container to the other. Use ``DELETE`` to remove from
``pithos`` without trashing, or to remove from ``trash``. When moving objects,
detect naming conflicts with the ``If-Match`` or ``If-None-Match`` headers.
Such conflicts should be resolved by the user.
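As a minimal sketch of the move-to-trash request described above: the ``trash_request`` helper, the ``/v1`` path prefix and the account/object parameters are illustrative assumptions, not part of the API; only the ``X-Move-From`` and ``If-None-Match`` headers come from the guidelines here, and a real client would also send an authentication token.

```python
def trash_request(account, obj):
    """Build a (method, path, headers) triple that moves ``obj`` from
    the ``pithos`` container to ``trash``.

    The ``/v1`` prefix is an assumed deployment detail; a real client
    would also send an X-Auth-Token obtained from the identity service.
    """
    headers = {
        # Source of the move, relative to the account.
        "X-Move-From": "/pithos/%s" % obj,
        # Fail instead of overwriting if ``trash`` already holds an
        # object with this name, so the user can resolve the conflict.
        "If-None-Match": "*",
        "Content-Length": "0",
    }
    return "PUT", "/v1/%s/trash/%s" % (account, obj), headers
```

A ``DELETE`` on the corresponding ``pithos`` path would instead remove the object without trashing it.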

Object names should use the ``/`` delimiter to impose a hierarchy of folders
and files.

The ``shared`` element should be implemented as a read-only view of the
``pithos`` container, using the ``shared`` parameter when listing objects. The
``others`` element should start with a top-level ``GET`` to retrieve the list
of accounts accessible to the user. It is suggested that the client software
hides the next step of navigation - the container - if it only includes
``pithos`` and forwards the user directly to the objects.

Public objects are not included in ``shared`` and ``others`` listings. It is
suggested that they are marked in a visually distinctive way in ``pithos``
listings (for example using an icon overlay).

A special application menu, or a section in application preferences, should be
devoted to managing groups (the ``groups`` element). All group-related actions
are implemented at the account level.

Browsing past versions of objects should be available both at the object and
the container level. At the object level, a list of past versions can be
included in the screen showing details or more information on the object
(metadata, permissions, etc.). At the container level, it is suggested that
clients use a ``history`` element, which presents to the user a read-only,
time-variable view of ``pithos`` contents. This can be accomplished via the
``until`` parameter in listings. Optionally, ``history`` may include ``trash``.
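A ``history`` listing request could be built as follows. Only the ``until`` parameter comes from the text above; the ``/v1`` prefix, the ``format`` parameter and the helper itself are illustrative assumptions.

```python
def history_listing(account, until):
    """Build a listing request for a past, read-only view of ``pithos``.

    ``until`` is a POSIX timestamp; the ``/v1`` prefix and the
    ``format`` parameter are assumed deployment details.
    """
    return "GET", "/v1/%s/pithos?format=json&until=%d" % (account, until)
```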

Uploading and downloading data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By using hashmaps to upload and download objects, the corresponding operations
can complete much faster.

In the case of an upload, only the missing blocks will be submitted to the
server:

 * Calculate the hash value for each block of the object to be uploaded. Use
   the hash algorithm and block size of the destination container.
 * Send a hashmap ``PUT`` request for the object.

   * Server responds with status ``201`` (Created):

     * Blocks are already on the server. The object has been created. Done.

   * Server responds with status ``409`` (Conflict):

     * Server's response body contains the hashes of the blocks that do not
       exist on the server.
     * For each hash value in the server's response (or all hashes together):

       * Send a ``POST`` request to the destination container with the
         corresponding data.

 * Repeat hashmap ``PUT``. Fail if the server's response is not ``201``.
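The steps above can be sketched in Python. This is a minimal sketch, not the official client: ``put_hashmap`` and ``post_block`` stand in for the actual HTTP requests, and the block size and hash algorithm (here SHA-256 on 4 MB blocks) must in reality be taken from the destination container's metadata.

```python
import hashlib

def compute_hashmap(data, blocksize, algorithm="sha256"):
    """Hash each block of ``data`` with the container's algorithm."""
    return [hashlib.new(algorithm, data[i:i + blocksize]).hexdigest()
            for i in range(0, len(data), blocksize)]

def upload(data, put_hashmap, post_block, blocksize=4 * 1024 * 1024):
    """Upload ``data`` by hashmap, submitting only the missing blocks.

    put_hashmap(hashes) -> (status, missing_hashes) models the hashmap
    ``PUT``; post_block(block) models the ``POST`` of one block to the
    destination container. Both are stand-ins for real HTTP calls.
    """
    hashes = compute_hashmap(data, blocksize)
    status, missing = put_hashmap(hashes)
    if status == 201:
        return                       # all blocks known; object created
    if status != 409:
        raise RuntimeError("unexpected status %d" % status)
    blocks = {h: data[i:i + blocksize]
              for i, h in zip(range(0, len(data), blocksize), hashes)}
    for h in missing:                # submit only the missing blocks
        post_block(blocks[h])
    status, _ = put_hashmap(hashes)  # repeat the hashmap PUT
    if status != 201:
        raise RuntimeError("upload failed with status %d" % status)
```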

Consulting hashmaps when downloading allows for resuming partially transferred
objects. The client should retrieve the hashmap from the server and compare it
with the hashmap computed from the respective local file. Any missing parts can
be downloaded with ``GET`` requests with the additional ``Range`` header.
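The comparison step can be sketched as follows, assuming SHA-256 block hashes for illustration (the real algorithm and block size come from the container's policy). Each returned range maps to one ``GET`` with a ``Range: bytes=offset-(offset+length-1)`` header.

```python
import hashlib

def missing_ranges(server_hashes, local_data, blocksize, algorithm="sha256"):
    """Return the (offset, length) ranges of blocks whose local hash
    does not match the server's hashmap, i.e. the parts still to fetch.

    The final block may be shorter than ``blocksize`` on the server;
    the ``Range`` header lets the server clamp the last range.
    """
    ranges = []
    for i, server_hash in enumerate(server_hashes):
        block = local_data[i * blocksize:(i + 1) * blocksize]
        if hashlib.new(algorithm, block).hexdigest() != server_hash:
            ranges.append((i * blocksize, blocksize))
    return ranges
```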

Syncing
~~~~~~~

Consider the following algorithm for synchronizing a local folder with the
server. The "state" is the complete object listing, with the corresponding
attributes.

.. code-block:: python

  # L: Local State, the last synced state of the object.
  # Stored locally (e.g. in an SQLite database)

  # C: Current State, the current local state of the object
  # Returned by the filesystem

  # S: Server State, the current server state of the object
  # Returned by the server (HTTP request)

  def sync(path):
      L = get_local_state(path)   # Database action
      C = get_current_state(path) # Filesystem action
      S = get_server_state(path)  # Network action

      if C == L:
          # No local changes
          if S == L:
              # No remote changes, nothing to do
              return
          else:
              # Update local state to match that of the server
              download(path)
              update_local_state(path, S)
      else:
          # Local changes exist
          if S == L:
              # No remote changes, update the server and the local state
              upload(path)
              update_local_state(path, C)
          else:
              # Both local and server changes exist
              if C == S:
                  # We were lucky, both did the same
                  update_local_state(path, C)
              else:
                  # Conflicting changes exist
                  conflict()

Notes:

 * States represent file hashes (using a Merkle hash is suggested). Deleted or
   non-existing files are assumed to have a magic hash (e.g. the empty string).
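A generic Merkle reduction over the block hashes might look like this. This is a sketch of one common construction, not necessarily the exact scheme a deployment uses (the padding rule and algorithm are illustrative choices); the empty string serves as the magic hash for deleted or non-existing files.

```python
import hashlib

MAGIC = ""  # magic hash for deleted / non-existing files

def merkle_hash(block_hashes):
    """Reduce a list of block hashes to a single top hash by pairwise
    hashing, level by level, duplicating the last hash on odd levels."""
    if not block_hashes:
        return MAGIC
    level = list(block_hashes)
    while len(level) > 1:
        if len(level) % 2:           # pad odd levels
            level.append(level[-1])
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]
```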