find . -path './lib/*.py' -o -path './scripts/gnt-*' -o \
-path './daemons/ganeti-*' -o -path './tools/*' -o \
-path './qa/*.py' | \
- etags -
+ etags -l python -
.PHONY: coverage
coverage: $(BUILT_SOURCES) $(python_tests)
- Added per-request RPC timeout
- RAPI now requires a Content-Type header for requests with a body (e.g.
``PUT`` or ``POST``) which must be set to ``application/json`` (see
- RFC2616 (HTTP/1.1), section 7.2.1)
+ :rfc:`2616` (HTTP/1.1), section 7.2.1)
- ``ganeti-watcher`` attempts to restart ``ganeti-rapi`` if RAPI is not
reachable
- Implemented initial support for running Ganeti daemons as separate
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
-will be prepended to the certificate similar to an RFC822 header and
+will be prepended to the certificate similar to an :rfc:`822` header and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature`` and its value will have the format
.. [#pwhash] Using the MD5 hash of username, realm and password is
- described in RFC2617_ ("HTTP Authentication"), sections 3.2.2.2 and
+ described in :rfc:`2617` ("HTTP Authentication"), sections 3.2.2.2 and
3.3. The reason for using it over another algorithm is forward
compatibility. If ``ganeti-rapi`` were to implement HTTP Digest
authentication in the future, the same hash could be used.
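The hash described in this footnote can be sketched in Python; the realm
string and the credentials below are illustrative values, not Ganeti's
actual configuration:

```python
import hashlib

def ha1_hash(username, realm, password):
    # MD5 over "username:realm:password", as described in RFC 2617,
    # section 3.2.2.2 (the "A1" value of HTTP Digest authentication)
    data = ("%s:%s:%s" % (username, realm, password)).encode("utf-8")
    return hashlib.md5(data).hexdigest()

# Illustrative values only
print(ha1_hash("jack", "Ganeti Remote API", "secret"))
```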
--------
The protocol used is JSON_ over HTTP designed after the REST_ principle.
-HTTP Basic authentication as per RFC2617_ is supported.
+HTTP Basic authentication as per :rfc:`2617` is supported.
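As a sketch of the Basic scheme from :rfc:`2617`, a client could build the
``Authorization`` header value as follows; the helper name and the
credentials are illustrative, not part of Ganeti:

```python
import base64

def basic_auth_header(username, password):
    # RFC 2617, section 2: the credentials are "username:password",
    # base64-encoded and prefixed with the scheme name
    creds = ("%s:%s" % (username, password)).encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")

print(basic_auth_header("jack", "secret"))
```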
.. _JSON: http://www.json.org/
.. _REST: http://en.wikipedia.org/wiki/Representational_State_Transfer
-.. _RFC2617: http://tools.ietf.org/rfc/rfc2617.txt
+
+
+A note on JSON as used by RAPI
+++++++++++++++++++++++++++++++
+
+JSON_ as used by Ganeti RAPI does not conform to the specification in
+:rfc:`4627`. Section 2 defines a JSON text to be either an object
+(``{"key": "value", …}``) or an array (``[1, 2, 3, …]``). In violation
+of this, RAPI uses plain strings (``"master-candidate"``, ``"1234"``) for
+some requests or responses. Changing this now would likely break
+existing clients and cause a lot of trouble.
+
+.. highlight:: ruby
+
+Unlike Python's `JSON encoder and decoder
+<http://docs.python.org/library/json.html>`_, other programming
+languages or libraries may only provide a strict implementation, not
+allowing plain values. For those, responses can usually be wrapped in an
+array whose first element is then used, e.g. the response ``"1234"``
+becomes ``["1234"]``. This works equally well for more complex values.
+Example in Ruby::
+
+ require "json"
+
+ # Insert code to get response here
+ response = "\"1234\""
+
+ decoded = JSON.parse("[#{response}]").first
+
+Short of modifying the encoder to allow encoding to a less strict
+format, requests will have to be formatted by hand. Newer RAPI requests
+already use a dictionary as their input data and shouldn't cause any
+problems.
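Python's own encoder already accepts plain values, so the sketch below only
illustrates the hand-formatting a strict encoder would need; the helper is
hypothetical and not part of Ganeti:

```python
import json

def encode_plain(value):
    # A strict encoder will only emit an object or an array, so wrap
    # the value in a one-element array and strip the surrounding
    # brackets from the serialized text by hand
    return json.dumps([value])[1:-1]

print(encode_plain("1234"))
```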
PUT or POST?
------------
-According to RFC2616 the main difference between PUT and POST is that
-POST can create new resources but PUT can only create the resource the
-URI was pointing to on the PUT request.
+According to :rfc:`2616` the main difference between PUT and POST is
+that POST can create new resources but PUT can only create the resource
+the URI was pointing to on the PUT request.
Unfortunately, due to historic reasons, the Ganeti RAPI library is not
consistent with this usage, so just use the methods as documented below
----------
Starting with Ganeti 2.0, Remote API traffic is encrypted using SSL/TLS
-by default. It supports Basic authentication as per RFC2617.
+by default. It supports Basic authentication as per :rfc:`2617`.
Paths for certificate, private key and CA files required for SSL/TLS
will be set at source configure time. Symlinks or command line
"DISK_OPT",
"DISK_TEMPLATE_OPT",
"DRAINED_OPT",
+ "DRY_RUN_OPT",
"DRBD_HELPER_OPT",
"EARLY_RELEASE_OPT",
"ENABLED_HV_OPT",
help=("Grab locks while doing the queries"
" in order to ensure more consistent results"))
-_DRY_RUN_OPT = cli_option("--dry-run", default=False,
- action="store_true",
- help=("Do not execute the operation, just run the"
- " check steps and verify it it could be"
- " executed"))
+DRY_RUN_OPT = cli_option("--dry-run", default=False,
+ action="store_true",
+ help=("Do not execute the operation, just run the"
+ " check steps and verify if it could be"
+ " executed"))
VERBOSE_OPT = cli_option("-v", "--verbose", default=False,
action="store_true",
cmd = aliases[cmd]
func, args_def, parser_opts, usage, description = commands[cmd]
- parser = OptionParser(option_list=parser_opts + [_DRY_RUN_OPT, DEBUG_OPT],
+ parser = OptionParser(option_list=parser_opts + [DEBUG_OPT],
description=description,
formatter=TitledHelpFormatter(),
usage="%%prog %s %s" % (cmd, usage))
if not options:
return
for op in opcode_list:
- op.dry_run = options.dry_run
+ if hasattr(options, "dry_run"):
+ op.dry_run = options.dry_run
op.debug_level = options.debug
### Node/iallocator related checks
_CheckIAllocatorOrNode(self, "iallocator", "pnode")
+ if self.op.pnode is not None:
+ if self.op.disk_template in constants.DTS_NET_MIRROR:
+ if self.op.snode is None:
+ raise errors.OpPrereqError("The networked disk templates need"
+ " a mirror node", errors.ECODE_INVAL)
+ elif self.op.snode:
+ self.LogWarning("Secondary node will be ignored on non-mirrored disk"
+ " template")
+ self.op.snode = None
+
self._cds = _GetClusterDomainSecret()
if self.op.mode == constants.INSTANCE_IMPORT:
# mirror node verification
if self.op.disk_template in constants.DTS_NET_MIRROR:
- if self.op.snode is None:
- raise errors.OpPrereqError("The networked disk templates need"
- " a mirror node", errors.ECODE_INVAL)
if self.op.snode == pnode.name:
raise errors.OpPrereqError("The secondary node cannot be the"
" primary node.", errors.ECODE_INVAL)
@ivar received_timestamp: the timestamp for when the job was received
@ivar start_timestamp: the timestamp for start of execution
@ivar end_timestamp: the timestamp for end of execution
- @ivar lock_status: In-memory locking information for debugging
"""
# pylint: disable-msg=W0212
__slots__ = ["queue", "id", "ops", "log_serial",
"received_timestamp", "start_timestamp", "end_timestamp",
- "lock_status", "change",
"__weakref__"]
def __init__(self, queue, job_id, ops):
self.start_timestamp = None
self.end_timestamp = None
- # In-memory attributes
- self.lock_status = None
-
def __repr__(self):
status = ["%s.%s" % (self.__class__.__module__, self.__class__.__name__),
"id=%s" % self.id,
obj.start_timestamp = state.get("start_timestamp", None)
obj.end_timestamp = state.get("end_timestamp", None)
- # In-memory attributes
- obj.lock_status = None
-
obj.ops = []
obj.log_serial = 0
for op_state in state["ops"]:
row.append(self.start_timestamp)
elif fname == "end_ts":
row.append(self.end_timestamp)
- elif fname == "lock_status":
- row.append(self.lock_status)
elif fname == "summary":
row.append([op.input.Summary() for op in self.ops])
else:
Processor.ExecOpCode) set to OP_STATUS_WAITLOCK.
"""
+ assert self._op in self._job.ops
assert self._op.status in (constants.OP_STATUS_WAITLOCK,
constants.OP_STATUS_CANCELING)
- # All locks are acquired by now
- self._job.lock_status = None
-
# Cancel here if we were asked to
self._CheckCancel()
logging.debug("Opcode is now running")
+
self._op.status = constants.OP_STATUS_RUNNING
self._op.exec_timestamp = TimeStampNow()
assert self._op.status in (constants.OP_STATUS_WAITLOCK,
constants.OP_STATUS_CANCELING)
- # Not getting the queue lock because this is a single assignment
- self._job.lock_status = msg
-
# Cancel here if we were asked to
self._CheckCancel()
op.result = result
op.end_timestamp = TimeStampNow()
if idx == count - 1:
- job.lock_status = None
job.end_timestamp = TimeStampNow()
# Consistency check
errors.GetEncodedError(i.result)
for i in job.ops[idx:])
finally:
- job.lock_status = None
job.end_timestamp = TimeStampNow()
queue.UpdateJobUnlocked(job)
finally:
try:
job.MarkUnfinishedOps(constants.OP_STATUS_CANCELED,
"Job canceled by request")
- job.lock_status = None
job.end_timestamp = TimeStampNow()
queue.UpdateJobUnlocked(job)
finally:
if node.startswith(remotehostname + "."):
msg = "hostname not FQDN"
else:
- msg = "hostname mistmatch"
+ msg = "hostname mismatch"
return False, ("%s: expected %s but got %s" %
(msg, node, remotehostname))
if qa_config.TestEnabled('create-cluster'):
RunTest(qa_cluster.TestClusterInit, rapi_user, rapi_secret)
RunTest(qa_node.TestNodeAddAll)
+ RunTest(qa_cluster.TestJobqueue)
else:
# consider the nodes are already there
qa_node.MarkNodeAddedAll()
AssertEqual(StartSSH(master['primary'],
utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+def TestJobqueue():
+ """gnt-debug test-jobqueue"""
+ master = qa_config.GetMasterNode()
+
+ cmd = ["gnt-debug", "test-jobqueue"]
+ AssertEqual(StartSSH(master["primary"],
+ utils.ShellQuoteArgs(cmd)).wait(), 0)
+
+
def TestClusterReservedLvs():
"""gnt-cluster reserved lvs"""
master = qa_config.GetMasterNode()
#!/usr/bin/python
#
-# Copyright (C) 2006, 2007 Google Inc.
+# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
SRC_DIR_OPT,
SRC_NODE_OPT,
SUBMIT_OPT,
+ DRY_RUN_OPT,
]
'export': (
ExportInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, SINGLE_NODE_OPT, NOSHUTDOWN_OPT, SHUTDOWN_TIMEOUT_OPT,
- REMOVE_INSTANCE_OPT, IGNORE_REMOVE_FAILURES_OPT],
+ REMOVE_INSTANCE_OPT, IGNORE_REMOVE_FAILURES_OPT, DRY_RUN_OPT],
"-n <target_node> [opts...] <name>",
"Exports an instance to an image"),
'import': (
"[...] -t disk-type -n node[:secondary-node] <name>",
"Imports an instance from an exported image"),
'remove': (
- RemoveExport, [ArgUnknown(min=1, max=1)], [],
+ RemoveExport, [ArgUnknown(min=1, max=1)], [DRY_RUN_OPT],
"<name>", "Remove exports of named instance from the filesystem."),
}
op = opcodes.OpRenameCluster(name=new_name)
result = SubmitOpCode(op, opts=opts, cl=cl)
- ToStdout("Cluster renamed from '%s' to '%s'", cluster_name, result)
+ if result:
+ ToStdout("Cluster renamed from '%s' to '%s'", cluster_name, result)
return 0
"", "Destroy cluster"),
'rename': (
RenameCluster, [ArgHost(min=1, max=1)],
- [FORCE_OPT],
+ [FORCE_OPT, DRY_RUN_OPT],
"<new_name>",
"Renames the cluster"),
'redist-conf': (
- RedistributeConfig, ARGS_NONE, [SUBMIT_OPT],
+ RedistributeConfig, ARGS_NONE, [SUBMIT_OPT, DRY_RUN_OPT],
"", "Forces a push of the configuration file and ssconf files"
" to the nodes in the cluster"),
'verify': (
VerifyCluster, ARGS_NONE,
- [VERBOSE_OPT, DEBUG_SIMERR_OPT, ERROR_CODES_OPT, NONPLUS1_OPT],
+ [VERBOSE_OPT, DEBUG_SIMERR_OPT, ERROR_CODES_OPT, NONPLUS1_OPT,
+ DRY_RUN_OPT],
"", "Does a check on the cluster configuration"),
'verify-disks': (
VerifyDisks, ARGS_NONE, [],
"", "Does a check on the cluster disk status"),
'repair-disk-sizes': (
- RepairDiskSizes, ARGS_MANY_INSTANCES, [],
+ RepairDiskSizes, ARGS_MANY_INSTANCES, [DRY_RUN_OPT],
"", "Updates mismatches in recorded disk sizes"),
'master-failover': (
MasterFailover, ARGS_NONE, [NOVOTING_OPT],
[BACKEND_OPT, CP_SIZE_OPT, ENABLED_HV_OPT, HVLIST_OPT,
NIC_PARAMS_OPT, NOLVM_STORAGE_OPT, VG_NAME_OPT, MAINTAIN_NODE_HEALTH_OPT,
UIDPOOL_OPT, ADD_UIDS_OPT, REMOVE_UIDS_OPT, DRBD_HELPER_OPT,
- NODRBD_STORAGE_OPT, DEFAULT_IALLOCATOR_OPT, RESERVED_LVS_OPT],
+ NODRBD_STORAGE_OPT, DEFAULT_IALLOCATOR_OPT, RESERVED_LVS_OPT,
+ DRY_RUN_OPT],
"[opts...]",
"Alters the parameters of the cluster"),
"renew-crypto": (
#!/usr/bin/python
#
-# Copyright (C) 2006, 2007 Google Inc.
+# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
action="append", help="Select nodes to sleep on"),
cli_option("-r", "--repeat", type="int", default="0", dest="repeat",
help="Number of times to repeat the sleep"),
+ DRY_RUN_OPT,
],
"[opts...] <duration>", "Executes a TestDelay OpCode"),
'submit-job': (
action="store_true", help="Show timing stats"),
cli_option("--each", default=False, action="store_true",
help="Submit each job separately"),
+ DRY_RUN_OPT,
],
"<op_list_file...>", "Submits jobs built from json files"
" containing a list of serialized opcodes"),
help="Select number of VCPUs for the instance"),
cli_option("--tags", default=None,
help="Comma separated list of tags"),
+ DRY_RUN_OPT,
],
"{opts...} <instance>", "Executes a TestAllocator OpCode"),
"test-jobqueue": (
name_check=opts.name_check)
result = SubmitOrSend(op, opts)
- ToStdout("Instance '%s' renamed to '%s'", args[0], result)
+ if result:
+ ToStdout("Instance '%s' renamed to '%s'", args[0], result)
return 0
NO_INSTALL_OPT,
OS_SIZE_OPT,
SUBMIT_OPT,
+ DRY_RUN_OPT,
]
commands = {
"[...] -t disk-type -n node[:secondary-node] -o os-type <name>",
"Creates and adds a new instance to the cluster"),
'batch-create': (
- BatchCreate, [ArgFile(min=1, max=1)], [],
+ BatchCreate, [ArgFile(min=1, max=1)], [DRY_RUN_OPT],
"<instances.json>",
"Create a bunch of instances based on specs in the file."),
'console': (
"[--show-cmd] <instance>", "Opens a console on the specified instance"),
'failover': (
FailoverInstance, ARGS_ONE_INSTANCE,
- [FORCE_OPT, IGNORE_CONSIST_OPT, SUBMIT_OPT, SHUTDOWN_TIMEOUT_OPT],
+ [FORCE_OPT, IGNORE_CONSIST_OPT, SUBMIT_OPT, SHUTDOWN_TIMEOUT_OPT,
+ DRY_RUN_OPT],
"[-f] <instance>", "Stops the instance and starts it on the backup node,"
" using the remote mirror (only for instances of type drbd)"),
'migrate': (
MigrateInstance, ARGS_ONE_INSTANCE,
- [FORCE_OPT, NONLIVE_OPT, MIGRATION_MODE_OPT, CLEANUP_OPT],
+ [FORCE_OPT, NONLIVE_OPT, MIGRATION_MODE_OPT, CLEANUP_OPT, DRY_RUN_OPT],
"[-f] <instance>", "Migrate instance to its secondary node"
" (only for instances of type drbd)"),
'move': (
MoveInstance, ARGS_ONE_INSTANCE,
- [FORCE_OPT, SUBMIT_OPT, SINGLE_NODE_OPT, SHUTDOWN_TIMEOUT_OPT],
+ [FORCE_OPT, SUBMIT_OPT, SINGLE_NODE_OPT, SHUTDOWN_TIMEOUT_OPT,
+ DRY_RUN_OPT],
"[-f] <instance>", "Move instance to an arbitrary node"
" (only for instances of type file and lv)"),
'info': (
[FORCE_OPT, OS_OPT, FORCE_VARIANT_OPT, m_force_multi, m_node_opt,
m_pri_node_opt, m_sec_node_opt, m_clust_opt, m_inst_opt, m_node_tags_opt,
m_pri_node_tags_opt, m_sec_node_tags_opt, m_inst_tags_opt, SELECT_OS_OPT,
- SUBMIT_OPT],
+ SUBMIT_OPT, DRY_RUN_OPT],
"[-f] <instance>", "Reinstall a stopped instance"),
'remove': (
RemoveInstance, ARGS_ONE_INSTANCE,
- [FORCE_OPT, SHUTDOWN_TIMEOUT_OPT, IGNORE_FAILURES_OPT, SUBMIT_OPT],
+ [FORCE_OPT, SHUTDOWN_TIMEOUT_OPT, IGNORE_FAILURES_OPT, SUBMIT_OPT,
+ DRY_RUN_OPT],
"[-f] <instance>", "Shuts down the instance and removes it"),
'rename': (
RenameInstance,
[ArgInstance(min=1, max=1), ArgHost(min=1, max=1)],
- [NOIPCHECK_OPT, NONAMECHECK_OPT, SUBMIT_OPT],
+ [NOIPCHECK_OPT, NONAMECHECK_OPT, SUBMIT_OPT, DRY_RUN_OPT],
"<instance> <new_name>", "Rename the instance"),
'replace-disks': (
ReplaceDisks, ARGS_ONE_INSTANCE,
[AUTO_REPLACE_OPT, DISKIDX_OPT, IALLOCATOR_OPT, EARLY_RELEASE_OPT,
- NEW_SECONDARY_OPT, ON_PRIMARY_OPT, ON_SECONDARY_OPT, SUBMIT_OPT],
+ NEW_SECONDARY_OPT, ON_PRIMARY_OPT, ON_SECONDARY_OPT, SUBMIT_OPT,
+ DRY_RUN_OPT],
"[-s|-p|-n NODE|-I NAME] <instance>",
"Replaces all disks for the instance"),
'modify': (
SetInstanceParams, ARGS_ONE_INSTANCE,
[BACKEND_OPT, DISK_OPT, FORCE_OPT, HVOPTS_OPT, NET_OPT, SUBMIT_OPT,
DISK_TEMPLATE_OPT, SINGLE_NODE_OPT, OS_OPT, FORCE_VARIANT_OPT,
- OSPARAMS_OPT],
+ OSPARAMS_OPT, DRY_RUN_OPT],
"<instance>", "Alters the parameters of an instance"),
'shutdown': (
GenericManyOps("shutdown", _ShutdownInstance), [ArgInstance()],
[m_node_opt, m_pri_node_opt, m_sec_node_opt, m_clust_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
- m_inst_tags_opt, m_inst_opt, m_force_multi, TIMEOUT_OPT, SUBMIT_OPT],
+ m_inst_tags_opt, m_inst_opt, m_force_multi, TIMEOUT_OPT, SUBMIT_OPT,
+ DRY_RUN_OPT],
"<instance>", "Stops an instance"),
'startup': (
GenericManyOps("startup", _StartupInstance), [ArgInstance()],
[FORCE_OPT, m_force_multi, m_node_opt, m_pri_node_opt, m_sec_node_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, m_clust_opt, m_inst_opt, SUBMIT_OPT, HVOPTS_OPT,
- BACKEND_OPT],
+ BACKEND_OPT, DRY_RUN_OPT],
"<instance>", "Starts an instance"),
'reboot': (
GenericManyOps("reboot", _RebootInstance), [ArgInstance()],
[m_force_multi, REBOOT_TYPE_OPT, IGNORE_SECONDARIES_OPT, m_node_opt,
m_pri_node_opt, m_sec_node_opt, m_clust_opt, m_inst_opt, SUBMIT_OPT,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
- m_inst_tags_opt, SHUTDOWN_TIMEOUT_OPT],
+ m_inst_tags_opt, SHUTDOWN_TIMEOUT_OPT, DRY_RUN_OPT],
"<instance>", "Reboots an instance"),
'activate-disks': (
- ActivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, IGNORE_SIZE_OPT],
+ ActivateDisks, ARGS_ONE_INSTANCE,
+ [SUBMIT_OPT, IGNORE_SIZE_OPT],
"<instance>", "Activate an instance's disks"),
'deactivate-disks': (
- DeactivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT],
+ DeactivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DRY_RUN_OPT],
"<instance>", "Deactivate an instance's disks"),
'recreate-disks': (
- RecreateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DISKIDX_OPT],
+ RecreateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DISKIDX_OPT, DRY_RUN_OPT],
"<instance>", "Recreate an instance's disks"),
'grow-disk': (
GrowDisk,
[ArgInstance(min=1, max=1), ArgUnknown(min=1, max=1),
ArgUnknown(min=1, max=1)],
- [SUBMIT_OPT, NWSYNC_OPT],
+ [SUBMIT_OPT, NWSYNC_OPT, DRY_RUN_OPT],
"<instance> <disk> <size>", "Grow an instance's disk"),
'list-tags': (
ListTags, ARGS_ONE_INSTANCE, [],
op = opcodes.OpPowercycleNode(node_name=node, force=opts.force)
result = SubmitOpCode(op, opts=opts)
- ToStderr(result)
+ if result:
+ ToStderr(result)
return 0
'modify': (
SetNodeParams, ARGS_ONE_NODE,
[FORCE_OPT, SUBMIT_OPT, MC_OPT, DRAINED_OPT, OFFLINE_OPT,
- AUTO_PROMOTE_OPT],
+ AUTO_PROMOTE_OPT, DRY_RUN_OPT],
"<node_name>", "Alters the parameters of a node"),
'powercycle': (
PowercycleNode, ARGS_ONE_NODE,
- [FORCE_OPT, CONFIRM_OPT],
+ [FORCE_OPT, CONFIRM_OPT, DRY_RUN_OPT],
"<node_name>", "Tries to forcefully powercycle a node"),
'remove': (
- RemoveNode, ARGS_ONE_NODE, [],
+ RemoveNode, ARGS_ONE_NODE, [DRY_RUN_OPT],
"<node_name>", "Removes a node from the cluster"),
'volumes': (
ListVolumes, [ArgNode()],
[ArgNode(min=1, max=1),
ArgChoice(min=1, max=1, choices=_MODIFIABLE_STORAGE_TYPES),
ArgFile(min=1, max=1)],
- [ALLOCATABLE_OPT],
+ [ALLOCATABLE_OPT, DRY_RUN_OPT],
"<node_name> <storage_type> <name>", "Modify storage volume on a node"),
'repair-storage': (
RepairStorage,
[ArgNode(min=1, max=1),
ArgChoice(min=1, max=1, choices=_REPAIRABLE_STORAGE_TYPES),
ArgFile(min=1, max=1)],
- [IGNORE_CONSIST_OPT],
+ [IGNORE_CONSIST_OPT, DRY_RUN_OPT],
"<node_name> <storage_type> <name>",
"Repairs a storage volume on a node"),
'list-tags': (
#!/usr/bin/python
#
-# Copyright (C) 2006, 2007 Google Inc.
+# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
candidate_pool_size=None,
os_hvp=os_hvp,
osparams=osp)
- SubmitOpCode(op)
+ SubmitOpCode(op, opts=opts)
return 0
ShowOSInfo, [ArgOs()], [], "", "Show detailed information about "
"operating systems"),
'modify': (
- ModifyOS, ARGS_ONE_OS, [HVLIST_OPT, OSPARAMS_OPT], "",
+ ModifyOS, ARGS_ONE_OS, [HVLIST_OPT, OSPARAMS_OPT, DRY_RUN_OPT], "",
"Modify the OS parameters"),
}
except IOError:
# Sadly paramiko doesn't provide errno or similar
# so we can just assume that the path already exists
- logging.info("Path %s seems already to exist on remote node. Ignore.",
+ logging.info("Path %s seems already to exist on remote node. Ignoring.",
auth_path)
for name, (data, perm) in filemap.iteritems():
authorized_keys = sftp.open(auth_keys, "a+")
try:
+ # Due to the way SFTPFile and BufferedFile are implemented,
+ # opening in a+ mode and then issuing a read(), readline() or
+ # iterating over the file (which uses read() internally) will see
+ # an empty file, since the paramiko internal file position and the
+ # OS-level file-position are desynchronized; therefore, we issue
+ # an explicit seek to resynchronize these; writes should (note
+ # should) still go to the right place
+ authorized_keys.seek(0, 0)
# We don't have to close, as the close happened already in AddAuthorizedKey
utils.AddAuthorizedKey(authorized_keys, filemap[pub_key][0])
finally:
" <node...>"), prog=program)
parser.add_option(cli.DEBUG_OPT)
parser.add_option(cli.VERBOSE_OPT)
+ default_key = ssh.GetUserFiles(constants.GANETI_RUNAS)[0]
+ parser.add_option(optparse.Option("-f", dest="private_key",
+ default=default_key,
+ help="The private key to (try to) use"
+ " for authentication"))
+ parser.add_option(optparse.Option("--key-type", dest="key_type",
+ choices=("rsa", "dsa"), default="dsa",
+ help="The private key type (rsa or dsa)"))
(options, args) = parser.parse_args()
stderr_handler = logging.StreamHandler()
stderr_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
- file_handler.setLevel(logging.DEBUG)
+ file_handler.setLevel(logging.INFO)
if options.debug:
- stderr_handler.setLevel(logging.NOTSET)
+ stderr_handler.setLevel(logging.DEBUG)
elif options.verbose:
stderr_handler.setLevel(logging.INFO)
else:
- stderr_handler.setLevel(logging.ERROR)
+ stderr_handler.setLevel(logging.WARNING)
- # This is the paramiko logger instance
- paramiko_logger = logging.getLogger("paramiko")
root_logger = logging.getLogger("")
root_logger.setLevel(logging.NOTSET)
root_logger.addHandler(stderr_handler)
root_logger.addHandler(file_handler)
+
+ # This is the paramiko logger instance
+ paramiko_logger = logging.getLogger("paramiko")
paramiko_logger.addHandler(file_handler)
+ # We don't want to debug Paramiko, so filter anything below warning
+ paramiko_logger.setLevel(logging.WARNING)
+
+
+def LoadPrivateKeys(options):
+ """Load the list of available private keys
+
+ It loads the standard ssh key from disk and then tries to connect to
+ the ssh agent too.
+
+ @rtype: list
+ @return: a list of C{paramiko.PKey}
+
+ """
+ if options.key_type == "rsa":
+ pkclass = paramiko.RSAKey
+ elif options.key_type == "dsa":
+ pkclass = paramiko.DSSKey
+ else:
+ logging.critical("Unknown key type %s selected (choose either rsa or dsa)",
+ options.key_type)
+ sys.exit(1)
+
+ try:
+ private_key = pkclass.from_private_key_file(options.private_key)
+ except (paramiko.SSHException, EnvironmentError), err:
+ logging.critical("Can't load private key %s: %s", options.private_key, err)
+ sys.exit(1)
+
+ try:
+ agent = paramiko.Agent()
+ agent_keys = agent.get_keys()
+ except paramiko.SSHException, err:
+ # this will only be seen when the agent is broken/uses invalid
+ # protocol; for non-existing agent, get_keys() will just return an
+ # empty tuple
+ logging.warning("Can't connect to the ssh agent: %s; skipping its use",
+ err)
+ agent_keys = []
+
+ return [private_key] + list(agent_keys)
+
+
+def LoginViaKeys(transport, username, keys):
+ """Try to login on the given transport via a list of keys.
+
+ @param transport: the transport to use
+ @param username: the username to login as
+ @type keys: list
+ @param keys: list of C{paramiko.PKey} to use for authentication
+ @rtype: boolean
+ @return: True or False depending on whether the login was
+ successful or not
+
+ """
+ for private_key in keys:
+ try:
+ transport.auth_publickey(username, private_key)
+ fpr = ":".join("%02x" % ord(i) for i in private_key.get_fingerprint())
+ if isinstance(private_key, paramiko.AgentKey):
+ logging.debug("Authenticated via the ssh-agent key %s", fpr)
+ else:
+ logging.debug("Authenticated via public key %s", fpr)
+ return True
+ except paramiko.SSHException:
+ continue
+ else:
+ # all keys exhausted
+ return False
def main():
SetupLogging(options)
- passwd = getpass.getpass(prompt="%s password:" % constants.GANETI_RUNAS)
+ all_keys = LoadPrivateKeys(options)
+
+ passwd = None
+ username = constants.GANETI_RUNAS
+ ssh_port = netutils.GetDaemonPort("ssh")
+
+ # Below, we need to join() the transport objects, as otherwise the
+ # following happens:
+ # - the main thread finishes
+ # - the atexit functions run (in the main thread), and cause the
+ # logging file to be closed
+ # - a tiny bit later, the transport thread is finally ending, and
+ # wants to log one more message, which fails as the file is closed
+ # now
for host in args:
- transport = paramiko.Transport((host, netutils.GetDaemonPort("ssh")))
- transport.connect(username=constants.GANETI_RUNAS, password=passwd)
+ transport = paramiko.Transport((host, ssh_port))
+ transport.start_client()
+ try:
+ if LoginViaKeys(transport, username, all_keys):
+ logging.info("Authenticated to %s via public key", host)
+ else:
+ logging.warning("Authentication to %s via public key failed, trying"
+ " password", host)
+ if passwd is None:
+ passwd = getpass.getpass(prompt="%s password:" % username)
+ transport.auth_password(username=username, password=passwd)
+ logging.info("Authenticated to %s via password", host)
+ except paramiko.SSHException, err:
+ logging.error("Connection or authentication failed to host %s: %s",
+ host, err)
+ transport.close()
+ # this is needed for compatibility with older Paramiko or Python
+ # versions
+ transport.join()
+ continue
try:
try:
SetupSSH(transport)
SetupNodeDaemon(transport)
except errors.GenericError, err:
- logging.fatal("While doing setup on host %s an error occured: %s", host,
- err)
+ logging.error("While doing setup on host %s an error occurred: %s",
+ host, err)
finally:
transport.close()
+ # this is needed for compatibility with older Paramiko or Python
+ # versions
+ transport.join()
if __name__ == "__main__":