Remove the use of queue lock in Python job queue
Since each process now creates only a 1-job queue, trying to use file locks only causes job deadlocks.
Also reduce the number of threads running in a job queue to 1.
Later the job queue will be removed completely....
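A minimal sketch of the single-worker arrangement described above (the names are illustrative, not Ganeti's actual API): with one worker thread consuming a per-process queue, no cross-process file lock is involved.

```python
import queue
import threading

def run_jobs(job_queue, results):
    # Single worker thread: no file lock is needed because only this
    # process ever touches its own job queue.
    while True:
        job = job_queue.get()
        if job is None:  # sentinel to stop the worker
            break
        results.append(job())

jobs = queue.Queue()  # per-process queue holding the one job this process runs
results = []
worker = threading.Thread(target=run_jobs, args=(jobs, results))
worker.start()

jobs.put(lambda: 2 + 2)  # the single job
jobs.put(None)           # shut the worker down
worker.join()
print(results)           # [4]
```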
Set process ID field when starting up a job
The ID of the current process is stored in the job file.
Signed-off-by: Petr Pudlak <pudlak@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
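Storing the PID in the job file might look like the following sketch (the file layout and the `process_id` field name are assumptions for illustration, not Ganeti's actual format):

```python
import json
import os

def record_job_pid(job_path):
    # Hypothetical helper: store the current process ID in the job's
    # serialized record so other daemons can check liveness or signal it.
    with open(job_path) as f:
        job = json.load(f)
    job["process_id"] = os.getpid()  # field name is illustrative
    with open(job_path, "w") as f:
        json.dump(job, f)

# Usage: create a job file, then stamp it with our PID on startup.
path = "job-1.json"
with open(path, "w") as f:
    json.dump({"id": 1, "status": "running"}, f)
record_job_pid(path)
```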
Add optional fields for job livelocks and process IDs
This will allow checking whether a particular job is alive, and sending signals to it while it is running.
If missing, the fields aren't serialized, for backwards compatibility.
Signed-off-by: Petr Pudlak <pudlak@google.com>...
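The backwards-compatibility trick of omitting unset optional fields can be sketched as follows (field names and the JSON encoding are assumed for illustration):

```python
import json

_OPTIONAL_FIELDS = ("livelock", "process_id")  # illustrative names

def serialize_job(job):
    # Optional fields are omitted when unset, so older readers that
    # don't know about them still accept the serialized job unchanged.
    data = dict(job)
    for field in _OPTIONAL_FIELDS:
        if data.get(field) is None:
            data.pop(field, None)
    return json.dumps(data, sort_keys=True)

old_style = serialize_job({"id": 7, "livelock": None, "process_id": None})
new_style = serialize_job({"id": 7, "process_id": 4242})
print(old_style)  # {"id": 7}
print(new_style)  # {"id": 7, "process_id": 4242}
```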
Add Haskell and Python modules for running jobs as processes
They will be used by Luxi daemon to spawn jobs as separate processes.
The communication protocol between the Luxi daemon and a spawned process is described in the documentation of module Ganeti.Query.Exec....
Add a method for checking if a particular job has ended
This will be used by job processes temporarily, until they get rid of using the job queue completely.
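Such a check can be as simple as testing whether the job has reached a final status; this sketch uses invented status names, not Ganeti's actual constants:

```python
# Illustrative set of terminal statuses; real status values would
# come from the job queue's own definitions.
FINISHED_STATUSES = frozenset(["success", "error", "canceled"])

def has_job_ended(job):
    # Hypothetical helper: a job has ended once its status is terminal.
    return job.get("status") in FINISHED_STATUSES

print(has_job_ended({"id": 3, "status": "running"}))  # False
print(has_job_ended({"id": 3, "status": "success"}))  # True
```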
Create a Python submodule for jqueue
.. so that we can add new code into separate files, instead of adding it to jqueue.py, which has already grown too large.