test/ganeti.hooks_unittest.py \
test/ganeti.http_unittest.py \
test/ganeti.locking_unittest.py \
+ test/ganeti.luxi_unittest.py \
test/ganeti.mcpu_unittest.py \
test/ganeti.objects_unittest.py \
+ test/ganeti.opcodes_unittest.py \
test/ganeti.rapi.resources_unittest.py \
test/ganeti.serializer_unittest.py \
test/ganeti.ssh_unittest.py \
News
====
+Version 2.2.0
+-------------
+
+- RAPI now requires a Content-Type header for requests with a body (e.g.
+ ``PUT`` or ``POST``), which must be set to ``application/json`` (see
+ RFC 2616 (HTTP/1.1), section 7.2.1)
+
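For client authors, the practical effect is that any request carrying a body now has to send this header. A minimal sketch in Python (the ``rapi_headers_and_body`` helper and its names are illustrative, not part of Ganeti's RAPI client library):

```python
import json

def rapi_headers_and_body(params):
    # Hypothetical helper: serialize a request body and attach the
    # Content-Type header that RAPI requires as of 2.2.0 for requests
    # with a body (RFC 2616, section 7.2.1).
    body = json.dumps(params)
    headers = {
        "Content-Type": "application/json",
        "Content-Length": str(len(body)),
    }
    return headers, body

# Even a PUT/POST with an empty JSON body needs the header
headers, body = rapi_headers_and_body({})
```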
+ Version 2.1.1
+ -------------
+
+ During the 2.1.0 long release candidate cycle, a lot of improvements and
+ changes accumulated, which were later released as 2.1.1.
+
+ Major changes
+ ~~~~~~~~~~~~~
+
+ The node evacuate command (``gnt-node evacuate``) was significantly
+ rewritten, and as such the IAllocator protocol was changed - a new
+ request type has been added. This unfortunate change during a stable
+ series is designed to improve the performance of node evacuations; on
+ well-balanced clusters with more than about five nodes, evacuation
+ should proceed in parallel for all instances of the node
+ being evacuated. As such, any existing IAllocator scripts need to be
+ updated, otherwise the above command will fail due to the unknown
+ request. The provided "dumb" allocator has not been updated, but the
+ ganeti-htools package has supported the new protocol since version 0.2.4.
+
+ Another important change is increased validation of node and instance
+ names. This might create problems in special cases, if invalid host
+ names are being used.
+
+ Also, a new layer of hypervisor parameters has been added, which sits at
+ OS level between the cluster defaults and the instance ones. This allows
+ customisation of virtualization parameters depending on the installed
+ OS. For example, instances with OS 'X' may have a different KVM kernel
+ (or any other parameter) than the cluster defaults. This is intended to
+ help manage multiple OSes on the same cluster, without manual
+ modification of each instance's parameters.
+
+ A tool for merging clusters, ``cluster-merge``, has been added in the
+ tools sub-directory.
+
+ Bug fixes
+ ~~~~~~~~~
+
+ - Improved the int/float conversions that should make the code more
+ robust in the face of errors from the node daemons
+ - Fixed the remove node code in case of internal configuration errors
+ - Fixed the node daemon behaviour in the face of an inconsistent queue
+ directory (e.g. read-only file-system where we can't open the files
+ read-write, etc.)
+ - Fixed the behaviour of gnt-node modify for master candidate demotion;
+ now it either aborts cleanly or, if given the new “auto_promote”
+ parameter, will automatically promote other nodes as needed
+ - Fixed compatibility with the (not yet released) Python 2.6.5, which
+ would have completely prevented Ganeti from working
+ - Fixed bug for instance export when not all disks were successfully
+ exported
+ - Fixed behaviour of node add when the new node is slow in starting up
+ the node daemon
+ - Fixed handling of signals in the LUXI client, which should improve
+ behaviour of command-line scripts
+ - Added checks for invalid node/instance names in the configuration (now
+ flagged during cluster verify)
+ - Fixed watcher behaviour for disk activation errors
+ - Fixed two potentially endless loops in http library, which led to the
+ RAPI daemon hanging and consuming 100% CPU in some cases
+ - Fixed bug in RAPI daemon related to hashed passwords
+ - Fixed bug for unintended qemu-level bridging of multi-NIC KVM
+ instances
+ - Enhanced compatibility with non-Debian OSes, by not using absolute
+ paths in some commands and allowing customisation of the ssh
+ configuration directory
+ - Fixed a possible future issue with new Python versions by abiding by
+ the proper use of the ``__slots__`` attribute on classes
+ - Added checks that should prevent directory traversal attacks
+ - Many documentation fixes based on feedback from users
+
+ New features
+ ~~~~~~~~~~~~
+
+ - Added an “early_release” mode for instance replace disks and node
+ evacuate, where we release locks earlier and thus allow higher
+ parallelism within the cluster
+ - Added watcher hooks, intended to allow the watcher to restart other
+ daemons (e.g. from the ganeti-nbma project), but they can of course
+ be used for any other purpose
+ - Added a compile-time disable for DRBD barriers, to increase
+ performance if the administrator trusts the power supply or the
+ storage system to not lose writes
+ - Added the option of using syslog for logging instead of, or in
+ addition to, Ganeti's own log files
+ - Removed the boot restriction for paravirtual NICs for KVM, as recent
+ versions can indeed boot from a paravirtual NIC
+ - Added a generic debug level for many operations; while this is not
+ used widely yet, it allows one to pass the debug value all the way to
+ the OS scripts
+ - Enhanced the hooks environment for instance moves (failovers,
+ migrations) where the primary/secondary nodes changed during the
+ operation, by adding {NEW,OLD}_{PRIMARY,SECONDARY} vars
+ - Enhanced data validation for many user-supplied values; one important
+ item is the restrictions imposed on instance and node names, which
+ might reject some (invalid) host names
+ - Added a configure-time option to disable file-based storage, if it's not
+ needed; this allows greater security separation between the master
+ node and the other nodes from the point of view of the inter-node RPC
+ protocol
+ - Added user notification in interactive tools if a job is waiting in the
+ job queue or trying to acquire locks
+ - Added log messages when a job is waiting for locks
+ - Added filtering by node tags in instance operations which admit
+ multiple instances (start, stop, reboot, reinstall)
+ - Added a new tool for cluster mergers, ``cluster-merge``
+ - Parameters from command line which are of the form ``a=b,c=d`` can now
+ use backslash escapes to pass in values which contain commas,
+ e.g. ``a=b\,c,d=e`` where the 'a' parameter would get the value
+ ``b,c``
+ - For KVM, the instance name is the first parameter passed to KVM, so
+ that it's more visible in the process list
+
+
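The escaping rule above can be illustrated with a small stand-alone splitter; this is a simplified sketch of the described behaviour, not Ganeti's actual ``UnescapeAndSplit`` implementation:

```python
def split_escaped(text, sep=","):
    """Split text on sep, keeping backslash-escaped separators literal."""
    parts = []
    current = []
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == "\\" and i + 1 < len(text) and text[i + 1] == sep:
            # Escaped separator: keep it as part of the current value
            current.append(sep)
            i += 2
        elif ch == sep:
            # Unescaped separator: end the current value
            parts.append("".join(current))
            current = []
            i += 1
        else:
            current.append(ch)
            i += 1
    parts.append("".join(current))
    return parts

# The NEWS example: 'a' gets the value "b,c"
print(split_escaped(r"a=b\,c,d=e"))  # ['a=b,c', 'd=e']
```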
Version 2.1.0
-------------
if ctx.handler_access is None:
raise AssertionError("Permissions definition missing")
+ # This is only made available in HandleRequest
+ ctx.body_data = None
+
req.private = ctx
+ # Check for expected attributes
+ assert req.private.handler
+ assert req.private.handler_fn
+ assert req.private.handler_access is not None
+
return req.private
- def GetAuthRealm(self, req):
- """Override the auth realm for queries.
+ def AuthenticationRequired(self, req):
+ """Determine whether authentication is required.
"""
- ctx = self._GetRequestContext(req)
- if ctx.handler_access:
- return self.AUTH_REALM
- else:
- return None
+ return bool(self._GetRequestContext(req).handler_access)
def Authenticate(self, req, username, password):
"""Checks whether a user can access a resource.
@param file_name: Path to output file
"""
- utils.WriteFile(file_name, data="%s\n" % utils.GenerateSecret(), mode=0400)
+ utils.WriteFile(file_name, data="%s\n" % utils.GenerateSecret(), mode=0400,
+ backup=True)
+
+
+ def GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_hmac_key,
+ rapi_cert_pem=None):
+ """Updates the cluster certificates, keys and secrets.
+
+ @type new_cluster_cert: bool
+ @param new_cluster_cert: Whether to generate a new cluster certificate
+ @type new_rapi_cert: bool
+ @param new_rapi_cert: Whether to generate a new RAPI certificate
+ @type new_hmac_key: bool
+ @param new_hmac_key: Whether to generate a new HMAC key
+ @type rapi_cert_pem: string
+ @param rapi_cert_pem: New RAPI certificate in PEM format
+
+ """
+ # SSL certificate
+ cluster_cert_exists = os.path.exists(constants.SSL_CERT_FILE)
+ if new_cluster_cert or not cluster_cert_exists:
+ if cluster_cert_exists:
+ utils.CreateBackup(constants.SSL_CERT_FILE)
+
+ logging.debug("Generating new cluster certificate at %s",
+ constants.SSL_CERT_FILE)
- GenerateSelfSignedSslCert(constants.SSL_CERT_FILE)
++ utils.GenerateSelfSignedSslCert(constants.SSL_CERT_FILE)
+
+ # HMAC key
+ if new_hmac_key or not os.path.exists(constants.HMAC_CLUSTER_KEY):
+ logging.debug("Writing new HMAC key to %s", constants.HMAC_CLUSTER_KEY)
+ GenerateHmacKey(constants.HMAC_CLUSTER_KEY)
+
+ # RAPI
+ rapi_cert_exists = os.path.exists(constants.RAPI_CERT_FILE)
+
+ if rapi_cert_pem:
+ # Assume rapi_cert_pem contains a valid PEM-formatted certificate and key
+ logging.debug("Writing RAPI certificate at %s",
+ constants.RAPI_CERT_FILE)
+ utils.WriteFile(constants.RAPI_CERT_FILE, data=rapi_cert_pem, backup=True)
+
+ elif new_rapi_cert or not rapi_cert_exists:
+ if rapi_cert_exists:
+ utils.CreateBackup(constants.RAPI_CERT_FILE)
+
+ logging.debug("Generating new RAPI certificate at %s",
+ constants.RAPI_CERT_FILE)
- GenerateSelfSignedSslCert(constants.RAPI_CERT_FILE)
++ utils.GenerateSelfSignedSslCert(constants.RAPI_CERT_FILE)
def _InitGanetiServerSetup(master_name):
"OFFLINE_OPT",
"OS_OPT",
"OS_SIZE_OPT",
+ "RAPI_CERT_OPT",
"READD_OPT",
"REBOOT_TYPE_OPT",
+ "REMOVE_INSTANCE_OPT",
"SECONDARY_IP_OPT",
"SELECT_OS_OPT",
"SEP_OPT",
string_file_storage_dir = self.op.file_storage_dir
# build the full file storage dir path
- file_storage_dir = os.path.normpath(os.path.join(
- self.cfg.GetFileStorageDir(),
- string_file_storage_dir, instance))
+ file_storage_dir = utils.PathJoin(self.cfg.GetFileStorageDir(),
+ string_file_storage_dir, instance)
-
disks = _GenerateDiskTemplate(self,
self.op.disk_template,
instance, pnode_name,
CONFD = "ganeti-confd"
RAPI = "ganeti-rapi"
MASTERD = "ganeti-masterd"
+ # used in the ganeti-nbma project
+ NLD = "ganeti-nld"
-MULTITHREADED_DAEMONS = frozenset([MASTERD])
-
-DAEMONS_SSL = {
- # daemon-name: (default-cert-path, default-key-path)
- NODED: (SSL_CERT_FILE, SSL_CERT_FILE),
- RAPI: (RAPI_CERT_FILE, RAPI_CERT_FILE),
-}
-
DAEMONS_PORTS = {
# daemon-name: ("proto", "default-port")
NODED: ("tcp", 1811),
CONFD: LOG_DIR + "conf-daemon.log",
RAPI: LOG_DIR + "rapi-daemon.log",
MASTERD: LOG_DIR + "master-daemon.log",
+ # used in the ganeti-nbma project
+ NLD: LOG_DIR + "nl-daemon.log",
}
+
LOG_OS_DIR = LOG_DIR + "os"
LOG_WATCHER = LOG_DIR + "watcher.log"
LOG_COMMANDS = LOG_DIR + "commands.log"
"""Calls the handler function for the current request.
"""
- handler_context = _HttpServerRequest(self.request_msg)
+ handler_context = _HttpServerRequest(self.request_msg.start_line.method,
+ self.request_msg.start_line.path,
+ self.request_msg.headers,
- self.request_msg.decoded_body)
++ self.request_msg.body)
+
+ logging.debug("Handling request %r", handler_context)
try:
try:
from ganeti import serializer
from ganeti import constants
from ganeti import errors
+ from ganeti import utils
-KEY_METHOD = 'method'
-KEY_ARGS = 'args'
+KEY_METHOD = "method"
+KEY_ARGS = "args"
KEY_SUCCESS = "success"
KEY_RESULT = "result"
import logging
import logging.handlers
import signal
+import OpenSSL
+ import datetime
+ import calendar
from cStringIO import StringIO
output = property(_GetOutput, None, None, "Return full output")
- def _BuildCmdEnvironment(env):
-def RunCmd(cmd, env=None, output=None, cwd='/', reset_env=False):
++def _BuildCmdEnvironment(env, reset):
+ """Builds the environment for an external program.
+
+ """
- cmd_env = os.environ.copy()
- cmd_env["LC_ALL"] = "C"
++ if reset:
++ cmd_env = {}
++ else:
++ cmd_env = os.environ.copy()
++ cmd_env["LC_ALL"] = "C"
++
+ if env is not None:
+ cmd_env.update(env)
++
+ return cmd_env
+
+
- def RunCmd(cmd, env=None, output=None, cwd="/"):
++def RunCmd(cmd, env=None, output=None, cwd="/", reset_env=False):
"""Execute a (shell) command.
The command should not read from its standard input, as it will be
if no_fork:
raise errors.ProgrammerError("utils.RunCmd() called with fork() disabled")
- if isinstance(cmd, list):
- cmd = [str(val) for val in cmd]
- strcmd = " ".join(cmd)
- shell = False
- else:
+ if isinstance(cmd, basestring):
strcmd = cmd
shell = True
- logging.debug("RunCmd '%s'", strcmd)
+ else:
+ cmd = [str(val) for val in cmd]
+ strcmd = ShellQuoteArgs(cmd)
+ shell = False
- if not reset_env:
- cmd_env = os.environ.copy()
- cmd_env["LC_ALL"] = "C"
+ if output:
+ logging.debug("RunCmd %s, output file '%s'", strcmd, output)
else:
- cmd_env = {}
+ logging.debug("RunCmd %s", strcmd)
- cmd_env = _BuildCmdEnvironment(env)
- if env is not None:
- cmd_env.update(env)
++ cmd_env = _BuildCmdEnvironment(env, reset_env)
try:
if output is None:
return RunResult(exitcode, signal_, out, err, strcmd)
+def StartDaemon(cmd, env=None, cwd="/", output=None, output_fd=None,
+ pidfile=None):
+ """Start a daemon process after forking twice.
+
+ @type cmd: string or list
+ @param cmd: Command to run
+ @type env: dict
+ @param env: Additional environment variables
+ @type cwd: string
+ @param cwd: Working directory for the program
+ @type output: string
+ @param output: Path to file in which to save the output
+ @type output_fd: int
+ @param output_fd: File descriptor for output
+ @type pidfile: string
+ @param pidfile: Process ID file
+ @rtype: int
+ @return: Daemon process ID
+ @raise errors.ProgrammerError: if we call this when forks are disabled
+
+ """
+ if no_fork:
+ raise errors.ProgrammerError("utils.StartDaemon() called with fork()"
+ " disabled")
+
+ if output and not (bool(output) ^ (output_fd is not None)):
+ raise errors.ProgrammerError("Only one of 'output' and 'output_fd' can be"
+ " specified")
+
+ if isinstance(cmd, basestring):
+ cmd = ["/bin/sh", "-c", cmd]
+
+ strcmd = ShellQuoteArgs(cmd)
+
+ if output:
+ logging.debug("StartDaemon %s, output file '%s'", strcmd, output)
+ else:
+ logging.debug("StartDaemon %s", strcmd)
+
- cmd_env = _BuildCmdEnvironment(env)
++ cmd_env = _BuildCmdEnvironment(env, False)
+
+ # Create pipe for sending PID back
+ (pidpipe_read, pidpipe_write) = os.pipe()
+ try:
+ try:
+ # Create pipe for sending error messages
+ (errpipe_read, errpipe_write) = os.pipe()
+ try:
+ try:
+ # First fork
+ pid = os.fork()
+ if pid == 0:
+ try:
+ # Child process, won't return
+ _StartDaemonChild(errpipe_read, errpipe_write,
+ pidpipe_read, pidpipe_write,
+ cmd, cmd_env, cwd,
+ output, output_fd, pidfile)
+ finally:
+ # Well, maybe child process failed
+ os._exit(1) # pylint: disable-msg=W0212
+ finally:
+ _CloseFDNoErr(errpipe_write)
+
+ # Wait for daemon to be started (or an error message to arrive) and read
+ # up to 100 KB as an error message
+ errormsg = RetryOnSignal(os.read, errpipe_read, 100 * 1024)
+ finally:
+ _CloseFDNoErr(errpipe_read)
+ finally:
+ _CloseFDNoErr(pidpipe_write)
+
+ # Read up to 128 bytes for PID
+ pidtext = RetryOnSignal(os.read, pidpipe_read, 128)
+ finally:
+ _CloseFDNoErr(pidpipe_read)
+
+ # Try to avoid zombies by waiting for child process
+ try:
+ os.waitpid(pid, 0)
+ except OSError:
+ pass
+
+ if errormsg:
+ raise errors.OpExecError("Error when starting daemon process: %r" %
+ errormsg)
+
+ try:
+ return int(pidtext)
+ except (ValueError, TypeError), err:
+ raise errors.OpExecError("Error while trying to parse PID %r: %s" %
+ (pidtext, err))
+
+
+def _StartDaemonChild(errpipe_read, errpipe_write,
+ pidpipe_read, pidpipe_write,
+ args, env, cwd,
+ output, fd_output, pidfile):
+ """Child process for starting daemon.
+
+ """
+ try:
+ # Close parent's side
+ _CloseFDNoErr(errpipe_read)
+ _CloseFDNoErr(pidpipe_read)
+
+ # First child process
+ os.chdir("/")
+ os.umask(077)
+ os.setsid()
+
+ # And fork for the second time
+ pid = os.fork()
+ if pid != 0:
+ # Exit first child process
+ os._exit(0) # pylint: disable-msg=W0212
+
+ # Make sure pipe is closed on execv* (and thereby notifies original process)
+ SetCloseOnExecFlag(errpipe_write, True)
+
+ # List of file descriptors to be left open
+ noclose_fds = [errpipe_write]
+
+ # Open PID file
+ if pidfile:
+ try:
+ # TODO: Atomic replace with another locked file instead of writing into
+ # it after creating
+ fd_pidfile = os.open(pidfile, os.O_WRONLY | os.O_CREAT, 0600)
+
+ # Lock the PID file (and fail if not possible to do so). Any code
+ # wanting to send a signal to the daemon should try to lock the PID
+ # file before reading it. If acquiring the lock succeeds, the daemon is
+ # no longer running and the signal should not be sent.
+ LockFile(fd_pidfile)
+
+ os.write(fd_pidfile, "%d\n" % os.getpid())
+ except Exception, err:
+ raise Exception("Creating and locking PID file failed: %s" % err)
+
+ # Keeping the file open to hold the lock
+ noclose_fds.append(fd_pidfile)
+
+ SetCloseOnExecFlag(fd_pidfile, False)
+ else:
+ fd_pidfile = None
+
+ # Open /dev/null
+ fd_devnull = os.open(os.devnull, os.O_RDWR)
+
+ assert not output or (bool(output) ^ (fd_output is not None))
+
+ if fd_output is not None:
+ pass
+ elif output:
+ # Open output file
+ try:
+ # TODO: Implement flag to set append=yes/no
+ fd_output = os.open(output, os.O_WRONLY | os.O_CREAT, 0600)
+ except EnvironmentError, err:
+ raise Exception("Opening output file failed: %s" % err)
+ else:
+ fd_output = fd_devnull
+
+ # Redirect standard I/O
+ os.dup2(fd_devnull, 0)
+ os.dup2(fd_output, 1)
+ os.dup2(fd_output, 2)
+
+ # Send daemon PID to parent
+ RetryOnSignal(os.write, pidpipe_write, str(os.getpid()))
+
+ # Close all file descriptors except stdio and error message pipe
+ CloseFDs(noclose_fds=noclose_fds)
+
+ # Change working directory
+ os.chdir(cwd)
+
+ if env is None:
+ os.execvp(args[0], args)
+ else:
+ os.execvpe(args[0], args, env)
+ except: # pylint: disable-msg=W0702
+ try:
+ # Report errors to original process
+ buf = str(sys.exc_info()[1])
+
+ RetryOnSignal(os.write, errpipe_write, buf)
+ except: # pylint: disable-msg=W0702
+ # Ignore errors in error handling
+ pass
+
+ os._exit(1) # pylint: disable-msg=W0212
+
+
def _RunCmdPipe(cmd, env, via_shell, cwd):
"""Run a command and return its output.
return status
+def SetCloseOnExecFlag(fd, enable):
+ """Sets or unsets the close-on-exec flag on a file descriptor.
+
+ @type fd: int
+ @param fd: File descriptor
+ @type enable: bool
+ @param enable: Whether to set or unset it.
+
+ """
+ flags = fcntl.fcntl(fd, fcntl.F_GETFD)
+
+ if enable:
+ flags |= fcntl.FD_CLOEXEC
+ else:
+ flags &= ~fcntl.FD_CLOEXEC
+
+ fcntl.fcntl(fd, fcntl.F_SETFD, flags)
+
+
+def SetNonblockFlag(fd, enable):
+ """Sets or unsets the O_NONBLOCK flag on on a file descriptor.
+
+ @type fd: int
+ @param fd: File descriptor
+ @type enable: bool
+ @param enable: Whether to set or unset it
+
+ """
+ flags = fcntl.fcntl(fd, fcntl.F_GETFL)
+
+ if enable:
+ flags |= os.O_NONBLOCK
+ else:
+ flags &= ~os.O_NONBLOCK
+
+ fcntl.fcntl(fd, fcntl.F_SETFL, flags)
+
+
+def RetryOnSignal(fn, *args, **kwargs):
+ """Calls a function again if it failed due to EINTR.
+
+ """
+ while True:
+ try:
+ return fn(*args, **kwargs)
+ except EnvironmentError, err:
+ if err.errno != errno.EINTR:
+ raise
+ except select.error, err:
+ if not (err.args and err.args[0] == errno.EINTR):
+ raise
+
+
+ def RunParts(dir_name, env=None, reset_env=False):
+ """Run Scripts or programs in a directory
+
+ @type dir_name: string
+ @param dir_name: absolute path to a directory
+ @type env: dict
+ @param env: The environment to use
+ @type reset_env: boolean
+ @param reset_env: whether to reset or keep the default os environment
+ @rtype: list of tuples
+ @return: list of (name, (one of RUNPARTS_STATUS), RunResult)
+
+ """
+ rr = []
+
+ try:
+ dir_contents = ListVisibleFiles(dir_name)
+ except OSError, err:
+ logging.warning("RunParts: skipping %s (cannot list: %s)", dir_name, err)
+ return rr
+
+ for relname in sorted(dir_contents):
+ fname = PathJoin(dir_name, relname)
+ if not (os.path.isfile(fname) and os.access(fname, os.X_OK) and
+ constants.EXT_PLUGIN_MASK.match(relname) is not None):
+ rr.append((relname, constants.RUNPARTS_SKIP, None))
+ else:
+ try:
+ result = RunCmd([fname], env=env, reset_env=reset_env)
+ except Exception, err: # pylint: disable-msg=W0703
+ rr.append((relname, constants.RUNPARTS_ERR, str(err)))
+ else:
+ rr.append((relname, constants.RUNPARTS_RUN, result))
+
+ return rr
+
+
def RemoveFile(filename):
"""Remove a file ignoring some errors.
import tempfile
+ from ganeti import constants
-from ganeti import bootstrap
from ganeti import utils
import qa_config
utils.ShellQuoteArgs(cmd)).wait(), 0)
+ def TestClusterRenewCrypto():
+ """gnt-cluster renew-crypto"""
+ master = qa_config.GetMasterNode()
+
+ # Conflicting options
+ cmd = ["gnt-cluster", "renew-crypto", "--force",
+ "--new-cluster-certificate", "--new-hmac-key",
+ "--new-rapi-certificate", "--rapi-certificate=/dev/null"]
+ AssertNotEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+ # Invalid RAPI certificate
+ cmd = ["gnt-cluster", "renew-crypto", "--force",
+ "--rapi-certificate=/dev/null"]
+ AssertNotEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+ # Custom RAPI certificate
+ fh = tempfile.NamedTemporaryFile()
+
+ # Ensure certificate doesn't cause "gnt-cluster verify" to complain
+ validity = constants.SSL_CERT_EXPIRATION_WARN * 3
+
- bootstrap.GenerateSelfSignedSslCert(fh.name, validity=validity)
++ utils.GenerateSelfSignedSslCert(fh.name, validity=validity)
+
+ tmpcert = qa_utils.UploadFile(master["primary"], fh.name)
+ try:
+ cmd = ["gnt-cluster", "renew-crypto", "--force",
+ "--rapi-certificate=%s" % tmpcert]
+ AssertEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+ finally:
+ cmd = ["rm", "-f", tmpcert]
+ AssertEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+ # Normal case
+ cmd = ["gnt-cluster", "renew-crypto", "--force",
+ "--new-cluster-certificate", "--new-hmac-key",
+ "--new-rapi-certificate"]
+ AssertEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
def TestClusterBurnin():
"""Burnin"""
master = qa_config.GetMasterNode()
import re
import select
import string
+import fcntl
import OpenSSL
+ import warnings
+ import distutils.version
+ import glob
import ganeti
import testutils
cwd = os.getcwd()
self.failUnlessEqual(RunCmd(["pwd"], cwd=cwd).stdout.strip(), cwd)
+ def testResetEnv(self):
+ """Test environment reset functionality"""
+ self.failUnlessEqual(RunCmd(["env"], reset_env=True).stdout.strip(), "")
+
+
+ class TestRunParts(unittest.TestCase):
+ """Testing case for the RunParts function"""
+
+ def setUp(self):
+ self.rundir = tempfile.mkdtemp(prefix="ganeti-test", suffix=".tmp")
+
+ def tearDown(self):
+ shutil.rmtree(self.rundir)
+
+ def testEmpty(self):
+ """Test on an empty dir"""
+ self.failUnlessEqual(RunParts(self.rundir, reset_env=True), [])
+
+ def testSkipWrongName(self):
+ """Test that wrong files are skipped"""
+ fname = os.path.join(self.rundir, "00test.dot")
+ utils.WriteFile(fname, data="")
+ os.chmod(fname, stat.S_IREAD | stat.S_IEXEC)
+ relname = os.path.basename(fname)
+ self.failUnlessEqual(RunParts(self.rundir, reset_env=True),
+ [(relname, constants.RUNPARTS_SKIP, None)])
+
+ def testSkipNonExec(self):
+ """Test that non executable files are skipped"""
+ fname = os.path.join(self.rundir, "00test")
+ utils.WriteFile(fname, data="")
+ relname = os.path.basename(fname)
+ self.failUnlessEqual(RunParts(self.rundir, reset_env=True),
+ [(relname, constants.RUNPARTS_SKIP, None)])
+
+ def testError(self):
+ """Test error on a broken executable"""
+ fname = os.path.join(self.rundir, "00test")
+ utils.WriteFile(fname, data="")
+ os.chmod(fname, stat.S_IREAD | stat.S_IEXEC)
+ (relname, status, error) = RunParts(self.rundir, reset_env=True)[0]
+ self.failUnlessEqual(relname, os.path.basename(fname))
+ self.failUnlessEqual(status, constants.RUNPARTS_ERR)
+ self.failUnless(error)
+
+ def testSorted(self):
+ """Test executions are sorted"""
+ files = []
+ files.append(os.path.join(self.rundir, "64test"))
+ files.append(os.path.join(self.rundir, "00test"))
+ files.append(os.path.join(self.rundir, "42test"))
+
+ for fname in files:
+ utils.WriteFile(fname, data="")
+
+ results = RunParts(self.rundir, reset_env=True)
+
+ for fname in sorted(files):
+ self.failUnlessEqual(os.path.basename(fname), results.pop(0)[0])
+
+ def testOk(self):
+ """Test correct execution"""
+ fname = os.path.join(self.rundir, "00test")
+ utils.WriteFile(fname, data="#!/bin/sh\n\necho -n ciao")
+ os.chmod(fname, stat.S_IREAD | stat.S_IEXEC)
+ (relname, status, runresult) = RunParts(self.rundir, reset_env=True)[0]
+ self.failUnlessEqual(relname, os.path.basename(fname))
+ self.failUnlessEqual(status, constants.RUNPARTS_RUN)
+ self.failUnlessEqual(runresult.stdout, "ciao")
+
+ def testRunFail(self):
+ """Test correct execution, with run failure"""
+ fname = os.path.join(self.rundir, "00test")
+ utils.WriteFile(fname, data="#!/bin/sh\n\nexit 1")
+ os.chmod(fname, stat.S_IREAD | stat.S_IEXEC)
+ (relname, status, runresult) = RunParts(self.rundir, reset_env=True)[0]
+ self.failUnlessEqual(relname, os.path.basename(fname))
+ self.failUnlessEqual(status, constants.RUNPARTS_RUN)
+ self.failUnlessEqual(runresult.exit_code, 1)
+ self.failUnless(runresult.failed)
+
+ def testRunMix(self):
+ files = []
+ files.append(os.path.join(self.rundir, "00test"))
+ files.append(os.path.join(self.rundir, "42test"))
+ files.append(os.path.join(self.rundir, "64test"))
+ files.append(os.path.join(self.rundir, "99test"))
+
+ files.sort()
+
+ # 1st has errors in execution
+ utils.WriteFile(files[0], data="#!/bin/sh\n\nexit 1")
+ os.chmod(files[0], stat.S_IREAD | stat.S_IEXEC)
+
+ # 2nd is skipped
+ utils.WriteFile(files[1], data="")
+
+ # 3rd cannot execute properly
+ utils.WriteFile(files[2], data="")
+ os.chmod(files[2], stat.S_IREAD | stat.S_IEXEC)
+
+ # 4th execs
+ utils.WriteFile(files[3], data="#!/bin/sh\n\necho -n ciao")
+ os.chmod(files[3], stat.S_IREAD | stat.S_IEXEC)
+
+ results = RunParts(self.rundir, reset_env=True)
+
+ (relname, status, runresult) = results[0]
+ self.failUnlessEqual(relname, os.path.basename(files[0]))
+ self.failUnlessEqual(status, constants.RUNPARTS_RUN)
+ self.failUnlessEqual(runresult.exit_code, 1)
+ self.failUnless(runresult.failed)
+
+ (relname, status, runresult) = results[1]
+ self.failUnlessEqual(relname, os.path.basename(files[1]))
+ self.failUnlessEqual(status, constants.RUNPARTS_SKIP)
+ self.failUnlessEqual(runresult, None)
+
+ (relname, status, runresult) = results[2]
+ self.failUnlessEqual(relname, os.path.basename(files[2]))
+ self.failUnlessEqual(status, constants.RUNPARTS_ERR)
+ self.failUnless(runresult)
+
+ (relname, status, runresult) = results[3]
+ self.failUnlessEqual(relname, os.path.basename(files[3]))
+ self.failUnlessEqual(status, constants.RUNPARTS_RUN)
+ self.failUnlessEqual(runresult.output, "ciao")
+ self.failUnlessEqual(runresult.exit_code, 0)
+ self.failUnless(not runresult.failed)
+
+class TestStartDaemon(testutils.GanetiTestCase):
+ def setUp(self):
+ self.tmpdir = tempfile.mkdtemp(prefix="ganeti-test")
+ self.tmpfile = os.path.join(self.tmpdir, "test")
+
+ def tearDown(self):
+ shutil.rmtree(self.tmpdir)
+
+ def testShell(self):
+ utils.StartDaemon("echo Hello World > %s" % self.tmpfile)
+ self._wait(self.tmpfile, 60.0, "Hello World")
+
+ def testShellOutput(self):
+ utils.StartDaemon("echo Hello World", output=self.tmpfile)
+ self._wait(self.tmpfile, 60.0, "Hello World")
+
+ def testNoShellNoOutput(self):
+ utils.StartDaemon(["pwd"])
+
+ def testNoShellNoOutputTouch(self):
+ testfile = os.path.join(self.tmpdir, "check")
+ self.failIf(os.path.exists(testfile))
+ utils.StartDaemon(["touch", testfile])
+ self._wait(testfile, 60.0, "")
+
+ def testNoShellOutput(self):
+ utils.StartDaemon(["pwd"], output=self.tmpfile)
+ self._wait(self.tmpfile, 60.0, "/")
+
+ def testNoShellOutputCwd(self):
+ utils.StartDaemon(["pwd"], output=self.tmpfile, cwd=os.getcwd())
+ self._wait(self.tmpfile, 60.0, os.getcwd())
+
+ def testShellEnv(self):
+ utils.StartDaemon("echo \"$GNT_TEST_VAR\"", output=self.tmpfile,
+ env={ "GNT_TEST_VAR": "Hello World", })
+ self._wait(self.tmpfile, 60.0, "Hello World")
+
+ def testNoShellEnv(self):
+ utils.StartDaemon(["printenv", "GNT_TEST_VAR"], output=self.tmpfile,
+ env={ "GNT_TEST_VAR": "Hello World", })
+ self._wait(self.tmpfile, 60.0, "Hello World")
+
+ def testOutputFd(self):
+ fd = os.open(self.tmpfile, os.O_WRONLY | os.O_CREAT)
+ try:
+ utils.StartDaemon(["pwd"], output_fd=fd, cwd=os.getcwd())
+ finally:
+ os.close(fd)
+ self._wait(self.tmpfile, 60.0, os.getcwd())
+
+ def testPid(self):
+ pid = utils.StartDaemon("echo $$ > %s" % self.tmpfile)
+ self._wait(self.tmpfile, 60.0, str(pid))
+
+ def testPidFile(self):
+ pidfile = os.path.join(self.tmpdir, "pid")
+ checkfile = os.path.join(self.tmpdir, "abort")
+
+ pid = utils.StartDaemon("while sleep 5; do :; done", pidfile=pidfile,
+ output=self.tmpfile)
+ try:
+ fd = os.open(pidfile, os.O_RDONLY)
+ try:
+ # Check file is locked
+ self.assertRaises(errors.LockError, utils.LockFile, fd)
+
+ pidtext = os.read(fd, 100)
+ finally:
+ os.close(fd)
+
+ self.assertEqual(int(pidtext.strip()), pid)
+
+ self.assert_(utils.IsProcessAlive(pid))
+ finally:
+ # No matter what happens, kill daemon
+ utils.KillProcess(pid, timeout=5.0, waitpid=False)
+ self.failIf(utils.IsProcessAlive(pid))
+
+ self.assertEqual(utils.ReadFile(self.tmpfile), "")
+
+ def _wait(self, path, timeout, expected):
+ # Due to the asynchronous nature of daemon processes, polling is necessary.
+ # A timeout makes sure the test doesn't hang forever.
+ def _CheckFile():
+ if not (os.path.isfile(path) and
+ utils.ReadFile(path).strip() == expected):
+ raise utils.RetryAgain()
+
+ try:
+ utils.Retry(_CheckFile, (0.01, 1.5, 1.0), timeout)
+ except utils.RetryTimeout:
+ self.fail("Apparently the daemon didn't run in %s seconds and/or"
+ " didn't write the correct output" % timeout)
+
+ def testError(self):
+ self.assertRaises(errors.OpExecError, utils.StartDaemon,
+ ["./does-NOT-EXIST/here/0123456789"])
+ self.assertRaises(errors.OpExecError, utils.StartDaemon,
+ ["./does-NOT-EXIST/here/0123456789"],
+ output=os.path.join(self.tmpdir, "DIR/NOT/EXIST"))
+ self.assertRaises(errors.OpExecError, utils.StartDaemon,
+ ["./does-NOT-EXIST/here/0123456789"],
+ cwd=os.path.join(self.tmpdir, "DIR/NOT/EXIST"))
+ self.assertRaises(errors.OpExecError, utils.StartDaemon,
+ ["./does-NOT-EXIST/here/0123456789"],
+ output=os.path.join(self.tmpdir, "DIR/NOT/EXIST"))
+
+ fd = os.open(self.tmpfile, os.O_WRONLY | os.O_CREAT)
+ try:
+ self.assertRaises(errors.ProgrammerError, utils.StartDaemon,
+ ["./does-NOT-EXIST/here/0123456789"],
+ output=self.tmpfile, output_fd=fd)
+ finally:
+ os.close(fd)
+
+
+class TestSetCloseOnExecFlag(unittest.TestCase):
+ """Tests for SetCloseOnExecFlag"""
+
+ def setUp(self):
+ self.tmpfile = tempfile.TemporaryFile()
+
+ def testEnable(self):
+ utils.SetCloseOnExecFlag(self.tmpfile.fileno(), True)
+ self.failUnless(fcntl.fcntl(self.tmpfile.fileno(), fcntl.F_GETFD) &
+ fcntl.FD_CLOEXEC)
+
+ def testDisable(self):
+ utils.SetCloseOnExecFlag(self.tmpfile.fileno(), False)
+ self.failIf(fcntl.fcntl(self.tmpfile.fileno(), fcntl.F_GETFD) &
+ fcntl.FD_CLOEXEC)
+
+
+class TestSetNonblockFlag(unittest.TestCase):
+ def setUp(self):
+ self.tmpfile = tempfile.TemporaryFile()
+
+ def testEnable(self):
+ utils.SetNonblockFlag(self.tmpfile.fileno(), True)
+ self.failUnless(fcntl.fcntl(self.tmpfile.fileno(), fcntl.F_GETFL) &
+ os.O_NONBLOCK)
+
+ def testDisable(self):
+ utils.SetNonblockFlag(self.tmpfile.fileno(), False)
+ self.failIf(fcntl.fcntl(self.tmpfile.fileno(), fcntl.F_GETFL) &
+ os.O_NONBLOCK)
+
+
class TestRemoveFile(unittest.TestCase):
"""Test case for the RemoveFile function"""
self.failUnlessEqual(UnescapeAndSplit(sep.join(a), sep=sep), b)
+class TestGenerateSelfSignedX509Cert(unittest.TestCase):
+ def setUp(self):
+ self.tmpdir = tempfile.mkdtemp()
+
+ def tearDown(self):
+ shutil.rmtree(self.tmpdir)
+
+ def _checkRsaPrivateKey(self, key):
+ lines = key.splitlines()
+ return ("-----BEGIN RSA PRIVATE KEY-----" in lines and
+ "-----END RSA PRIVATE KEY-----" in lines)
+
+ def _checkCertificate(self, cert):
+ lines = cert.splitlines()
+ return ("-----BEGIN CERTIFICATE-----" in lines and
+ "-----END CERTIFICATE-----" in lines)
+
+ def test(self):
+ for common_name in [None, ".", "Ganeti", "node1.example.com"]:
+ (key_pem, cert_pem) = utils.GenerateSelfSignedX509Cert(common_name, 300)
+      self.assert_(self._checkRsaPrivateKey(key_pem))
+      self.assert_(self._checkCertificate(cert_pem))
+
+ key = OpenSSL.crypto.load_privatekey(OpenSSL.crypto.FILETYPE_PEM,
+ key_pem)
+ self.assert_(key.bits() >= 1024)
+ self.assertEqual(key.bits(), constants.RSA_KEY_BITS)
+ self.assertEqual(key.type(), OpenSSL.crypto.TYPE_RSA)
+
+ x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM,
+ cert_pem)
+ self.failIf(x509.has_expired())
+ self.assertEqual(x509.get_issuer().CN, common_name)
+ self.assertEqual(x509.get_subject().CN, common_name)
+ self.assertEqual(x509.get_pubkey().bits(), constants.RSA_KEY_BITS)
+
+ def testLegacy(self):
+ cert1_filename = os.path.join(self.tmpdir, "cert1.pem")
+
+ utils.GenerateSelfSignedSslCert(cert1_filename, validity=1)
+
+ cert1 = utils.ReadFile(cert1_filename)
+
+ self.assert_(self._checkRsaPrivateKey(cert1))
+ self.assert_(self._checkCertificate(cert1))
+
+
+class TestPathJoin(unittest.TestCase):
+  """Testing case for PathJoin"""
+
+  def testBasicItems(self):
+    mlist = ["/a", "b", "c"]
+    self.failUnlessEqual(PathJoin(*mlist), "/".join(mlist))
+
+  def testNonAbsPrefix(self):
+    self.failUnlessRaises(ValueError, PathJoin, "a", "b")
+
+  def testBackTrack(self):
+    self.failUnlessRaises(ValueError, PathJoin, "/a", "b/../c")
+
+  def testMultiAbs(self):
+    self.failUnlessRaises(ValueError, PathJoin, "/a", "/b")
+
+
+class TestHostInfo(unittest.TestCase):
+  """Testing case for HostInfo"""
+
+  def testUppercase(self):
+    data = "AbC.example.com"
+    self.failUnlessEqual(HostInfo.NormalizeName(data), data.lower())
+
+  def testTooLongName(self):
+    data = "a.b." + "c" * 255
+    self.failUnlessRaises(OpPrereqError, HostInfo.NormalizeName, data)
+
+  def testTrailingDot(self):
+    data = "a.b.c"
+    self.failUnlessEqual(HostInfo.NormalizeName(data + "."), data)
+
+  def testInvalidName(self):
+    data = [
+      "a b",
+      "a/b",
+      ".a.b",
+      "a..b",
+      ]
+    for value in data:
+      self.failUnlessRaises(OpPrereqError, HostInfo.NormalizeName, value)
+
+  def testValidName(self):
+    data = [
+      "a.b",
+      "a-b",
+      "a_b",
+      "a.b.c",
+      ]
+    for value in data:
+      HostInfo.NormalizeName(value)
+
+
+class TestParseAsn1Generalizedtime(unittest.TestCase):
+  def test(self):
+    # UTC
+    self.assertEqual(utils._ParseAsn1Generalizedtime("19700101000000Z"), 0)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20100222174152Z"),
+                     1266860512)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20380119031407Z"),
+                     (2**31) - 1)
+
+    # With offset
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20100222174152+0000"),
+                     1266860512)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20100223131652+0000"),
+                     1266931012)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20100223051808-0800"),
+                     1266931088)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("20100224002135+1100"),
+                     1266931295)
+    self.assertEqual(utils._ParseAsn1Generalizedtime("19700101000000-0100"),
+                     3600)
+
+    # Leap seconds are not supported by datetime.datetime
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime,
+                      "19841231235960+0000")
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime,
+                      "19920630235960+0000")
+
+    # Errors
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime, "")
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime, "invalid")
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime,
+                      "20100222174152")
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime,
+                      "Mon Feb 22 17:47:02 UTC 2010")
+    self.assertRaises(ValueError, utils._ParseAsn1Generalizedtime,
+                      "2010-02-22 17:42:02")
+
+
+class TestGetX509CertValidity(testutils.GanetiTestCase):
+  def setUp(self):
+    testutils.GanetiTestCase.setUp(self)
+
+    pyopenssl_version = distutils.version.LooseVersion(OpenSSL.__version__)
+
+    # Test whether we have pyOpenSSL 0.7 or above
+    self.pyopenssl0_7 = (pyopenssl_version >= "0.7")
+
+    if not self.pyopenssl0_7:
+      warnings.warn("This test requires pyOpenSSL 0.7 or above to"
+                    " function correctly")
+
+  def _LoadCert(self, name):
+    return OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM,
+                                           self._ReadTestData(name))
+
+  def test(self):
+    validity = utils.GetX509CertValidity(self._LoadCert("cert1.pem"))
+    if self.pyopenssl0_7:
+      self.assertEqual(validity, (1266919967, 1267524767))
+    else:
+      self.assertEqual(validity, (None, None))
+
+
if __name__ == '__main__':
testutils.GanetiTestProgram()