Mirror of https://github.com/ansible-collections/community.general.git
(synced 2025-05-31 05:19:09 -07:00)

commit f4f1075e86

Merge remote-tracking branch 'upstream/devel' into devel

Merge the latest code from upstream.

36 changed files with 722 additions and 225 deletions
@@ -15,8 +15,6 @@ matrix:
     python: 2.6
   - env: TARGET=sanity TOXENV=py27
     python: 2.7
   - env: TARGET=sanity TOXENV=py34
     python: 3.4
   - env: TARGET=sanity TOXENV=py35
     python: 3.5
-  - env: TARGET=sanity TOXENV=py24
@@ -32,6 +32,11 @@ Ansible Changes By Release
 - cloudstack
   * cs_router
   * cs_snapshot_policy
 - f5:
   * bigip_device_dns
   * bigip_device_ntp
   * bigip_device_sshd
   * bigip_irule
 - github_key
 - google
   * gcdns_record
@@ -39,6 +44,8 @@ Ansible Changes By Release
 - ipmi
   * ipmi_boot
   * ipmi_power
 - include_role
 - jenkins_plugin
 - letsencrypt
 - logicmonitor
 - logicmonitor_facts
@@ -59,6 +66,8 @@ Ansible Changes By Release
   * smartos_image_facts
 - systemd
 - telegram
 - univention
   * udm_user
 - vmware
   * vmware_guest
   * vmware_local_user_manager
Makefile (2 changes)
@@ -93,7 +93,7 @@ MOCK_CFG ?=

 NOSETESTS ?= nosetests

-NOSETESTS3 ?= nosetests-3.4
+NOSETESTS3 ?= nosetests-3.5

 ########################################################
@@ -13,42 +13,41 @@ factors that make it harder to port them than most code:
 Which version of Python-3.x and which version of Python-2.x are our minimums?
 =============================================================================

-The short answer is Python-3.4 and Python-2.4 but please read on for more
+The short answer is Python-3.5 and Python-2.4 but please read on for more
 information.

-For Python-3 we are currently using Python-3.4 as a minimum.  However, no long
-term supported Linux distributions currently ship with Python-3.  When that
-occurs, we will probably take that as our minimum Python-3 version rather than
-Python-3.4.  Thus far, Python-3 has been adding small changes that make it
-more compatible with Python-2 in its newer versions (For instance, Python-3.5
-added the ability to use percent-formatted byte strings.) so it should be more
-pleasant to use a newer version of Python-3 if it's available.  At some point
-this will change but we'll just have to cross that bridge when we get to it.
+For Python-3 we are currently using Python-3.5 as a minimum on both the
+controller and the managed nodes.  This was chosen as it's the version of
+Python3 in Ubuntu-16.04, the first long-term support (LTS) distribution to
+ship with Python3 and not Python2.  Much of our code would still work with
+Python-3.4 but there are always bugfixes and new features in any new upstream
+release.  Taking advantage of this relatively new version allows us not to
+worry about workarounds for problems and missing features in that older
+version.

-For Python-2 the default is for modules to run on Python-2.4.  This allows
-users with older distributions that are stuck on Python-2.4 to manage their
-machines.  Modules are allowed to drop support for Python-2.4 when one of
-their dependent libraries require a higher version of python.  This is not an
-invitation to add unnecessary dependent libraries in order to force your
-module to be usable only with a newer version of Python.  Instead it is an
-acknowledgment that some libraries (for instance, boto3 and docker-py) will
-only function with newer Python.
+For Python-2, the default is for the controller to run on Python-2.6 and
+modules to run on Python-2.4.  This allows users with older distributions that
+are stuck on Python-2.4 to manage their machines.  Modules are allowed to drop
+support for Python-2.4 when one of their dependent libraries require a higher
+version of python.  This is not an invitation to add unnecessary dependent
+libraries in order to force your module to be usable only with a newer version
+of Python.  Instead it is an acknowledgment that some libraries (for instance,
+boto3 and docker-py) will only function with newer Python.

 .. note:: When will we drop support for Python-2.4?

     The only long term supported distro that we know of with Python-2.4 is
     RHEL5 (and its rebuilds like CentOS5) which is supported until April of
-    2017.  We will likely end our support for Python-2.4 in modules in an
-    Ansible release around that time.  We know of no long term supported
-    distributions with Python-2.5 so the new minimum Python-2 version will
-    likely be Python-2.6.  This will let us take advantage of the
-    forwards-compat features of Python-2.6 so porting and maintainance of
-    Python-2/Python-3 code will be easier after that.
+    2017.  Whatever major release we make in or after April of 2017 (probably
+    2.4.0) will no longer have support for Python-2.4 on the managed machines.
+    Previous major release series's that we support (2.3.x) will continue to
+    support Python-2.4 on the managed nodes.

-.. note:: Ubuntu 16 LTS ships with Python 3.5
+    We know of no long term supported distributions with Python-2.5 so the new
+    minimum Python-2 version will be Python-2.6.  This will let us take
+    advantage of the forwards-compat features of Python-2.6 so porting and
+    maintainance of Python-2/Python-3 code will be easier after that.

-We have ongoing discussions now about taking Python3-3.5 as our minimum
-Python3 version.
-
 Supporting only Python-2 or only Python-3
 =========================================
@@ -131,6 +130,29 @@ modules should create their octals like this::

     # Can't use 0755 on Python-3 and can't use 0o755 on Python-2.4
     EXECUTABLE_PERMS = int('0755', 8)

+Outputting octal numbers may also need to be changed.  In python2 we often did
+this to return file permissions::
+
+    mode = int('0775', 8)
+    result['mode'] = oct(mode)
+
+This would give the user ``result['mode'] == '0775'`` in their playbook.  In
+python3, :func:`oct` returns the format with the lowercase ``o`` in it like:
+``result['mode'] == '0o775'``.  If a user had a conditional in their playbook
+or was using the mode in a template the new format might break things.  We
+need to return the old form of mode for backwards compatibility.  You can do
+it like this::
+
+    mode = int('0775', 8)
+    result['mode'] = '0%03o' % mode
+
+You should use this wherever backwards compatibility is a concern or you are
+dealing with file permissions.  (With file permissions a user may be feeding
+the mode into another program or to another module which doesn't understand
+the python syntax for octal numbers.  ``[zero][digit][digit][digit]`` is
+understood by most everything and therefore the right way to express octals in
+these circumstances.)
+
 Bundled six
 -----------
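The hunk above adds porting-guide text about octal output. As a standalone illustration of the incompatibility it describes (Python 3 semantics assumed; the variable names here are just for the example):

```python
# Parse a permission string in a way that works on both Python 2.4 and 3.
mode = int('0775', 8)        # 509 in decimal

# Python 3's oct() emits the '0o' prefix, which older playbooks and
# external tools do not expect.
new_style = oct(mode)        # '0o775' on Python 3

# The backwards-compatible formatting recommended by the guide:
compat = '0%03o' % mode      # '0775'

print(new_style, compat)
```

The `'0%03o'` form produces the `[zero][digit][digit][digit]` shape the guide recommends for anything that leaves the module as a result value.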
@@ -283,13 +283,15 @@ class DocCLI(CLI):
             choices = ''
             if 'choices' in opt:
                 choices = "(Choices: " + ", ".join(str(i) for i in opt['choices']) + ")"
             default = ''
             if 'default' in opt or not required:
                 default = "[Default: " + str(opt.get('default', '(null)')) + "]"
             text.append(textwrap.fill(CLI.tty_ify(choices + default), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))

         if 'notes' in doc and doc['notes'] and len(doc['notes']) > 0:
-            notes = " ".join(doc['notes'])
-            text.append("Notes:%s\n" % textwrap.fill(CLI.tty_ify(notes), limit-6, initial_indent=" ", subsequent_indent=opt_indent))
+            text.append("Notes:")
+            for note in doc['notes']:
+                text.append(textwrap.fill(CLI.tty_ify(note), limit-6, initial_indent=" * ", subsequent_indent=opt_indent))

         if 'requirements' in doc and doc['requirements'] is not None and len(doc['requirements']) > 0:
             req = ", ".join(doc['requirements'])
@@ -404,6 +404,15 @@ class TaskExecutor:
                 include_file = templar.template(include_file)
                 return dict(include=include_file, include_variables=include_variables)

+        #TODO: not needed?
+        # if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
+        elif self._task.action == 'include_role':
+            include_variables = self._task.args.copy()
+            role = include_variables.pop('name')
+            if not role:
+                return dict(failed=True, msg="No role was specified to include")
+            return dict(name=role, include_variables=include_variables)
+
         # Now we do final validation on the task, which sets all fields to their final values.
         self._task.post_validate(templar=templar)
         if '_variable_params' in self._task.args:
@@ -224,10 +224,10 @@ class TaskQueueManager:
         num_hosts = len(self._inventory.get_hosts(new_play.hosts))

         max_serial = 0
-        if play.serial:
+        if new_play.serial:
             # the play has not been post_validated here, so we may need
             # to convert the scalar value to a list at this point
-            serial_items = play.serial
+            serial_items = new_play.serial
             if not isinstance(serial_items, list):
                 serial_items = [serial_items]
             max_serial = max([pct_to_int(x, num_hosts) for x in serial_items])
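The fix above reads `serial` from the post-validated `new_play` and normalizes a scalar to a list before computing `max_serial`. The shape of that computation can be sketched like this (note that `pct_to_int` below is a hypothetical stand-in for Ansible's helper, not its actual implementation):

```python
def pct_to_int(value, num_items):
    # Hypothetical stand-in: accept either an integer or a percentage
    # string like "30%", resolving percentages against num_items.
    if isinstance(value, str) and value.endswith('%'):
        return int((int(value[:-1]) / 100.0) * num_items) or 1
    return int(value)

num_hosts = 10
serial_items = "30%"                  # a play may give a scalar...
if not isinstance(serial_items, list):
    serial_items = [serial_items]     # ...so normalize to a list first
max_serial = max(pct_to_int(x, num_hosts) for x in serial_items)
print(max_serial)  # 3
```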
@@ -139,7 +139,7 @@ from ansible.module_utils.pycompat24 import get_exception, literal_eval
 from ansible.module_utils.six import (PY2, PY3, b, binary_type, integer_types,
         iteritems, text_type, string_types)
 from ansible.module_utils.six.moves import map, reduce
-from ansible.module_utils._text import to_native
+from ansible.module_utils._text import to_native, to_bytes, to_text

 _NUMBERTYPES = tuple(list(integer_types) + [float])
@@ -808,7 +808,7 @@ class AnsibleModule(object):
         if not HAVE_SELINUX or not self.selinux_enabled():
             return context
         try:
-            ret = selinux.matchpathcon(to_native(path, 'strict'), mode)
+            ret = selinux.matchpathcon(to_native(path, errors='strict'), mode)
         except OSError:
             return context
         if ret[0] == -1:
@@ -823,7 +823,7 @@ class AnsibleModule(object):
         if not HAVE_SELINUX or not self.selinux_enabled():
             return context
         try:
-            ret = selinux.lgetfilecon_raw(to_native(path, 'strict'))
+            ret = selinux.lgetfilecon_raw(to_native(path, errors='strict'))
         except OSError:
             e = get_exception()
             if e.errno == errno.ENOENT:
@@ -984,8 +984,9 @@ class AnsibleModule(object):
         return changed

     def set_mode_if_different(self, path, mode, changed, diff=None):
-        path = os.path.expanduser(path)
-        path_stat = os.lstat(path)
+        b_path = to_bytes(path)
+        b_path = os.path.expanduser(b_path)
+        path_stat = os.lstat(b_path)

         if mode is None:
             return changed
@@ -1013,10 +1014,10 @@ class AnsibleModule(object):
         if diff is not None:
             if 'before' not in diff:
                 diff['before'] = {}
-            diff['before']['mode'] = oct(prev_mode)
+            diff['before']['mode'] = '0%03o' % prev_mode
             if 'after' not in diff:
                 diff['after'] = {}
-            diff['after']['mode'] = oct(mode)
+            diff['after']['mode'] = '0%03o' % mode

         if self.check_mode:
             return True
@@ -1024,22 +1025,22 @@ class AnsibleModule(object):
         # every time
         try:
             if hasattr(os, 'lchmod'):
-                os.lchmod(path, mode)
+                os.lchmod(b_path, mode)
             else:
-                if not os.path.islink(path):
-                    os.chmod(path, mode)
+                if not os.path.islink(b_path):
+                    os.chmod(b_path, mode)
                 else:
                     # Attempt to set the perms of the symlink but be
                     # careful not to change the perms of the underlying
                     # file while trying
-                    underlying_stat = os.stat(path)
-                    os.chmod(path, mode)
-                    new_underlying_stat = os.stat(path)
+                    underlying_stat = os.stat(b_path)
+                    os.chmod(b_path, mode)
+                    new_underlying_stat = os.stat(b_path)
                     if underlying_stat.st_mode != new_underlying_stat.st_mode:
-                        os.chmod(path, stat.S_IMODE(underlying_stat.st_mode))
+                        os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
         except OSError:
             e = get_exception()
-            if os.path.islink(path) and e.errno == errno.EPERM:  # Can't set mode on symbolic links
+            if os.path.islink(b_path) and e.errno == errno.EPERM:  # Can't set mode on symbolic links
                 pass
             elif e.errno in (errno.ENOENT, errno.ELOOP):  # Can't set mode on broken symbolic links
                 pass
@@ -1049,7 +1050,7 @@ class AnsibleModule(object):
             e = get_exception()
             self.fail_json(path=path, msg='chmod failed', details=str(e))

-        path_stat = os.lstat(path)
+        path_stat = os.lstat(b_path)
         new_mode = stat.S_IMODE(path_stat.st_mode)

         if new_mode != prev_mode:
@@ -1902,11 +1903,13 @@ class AnsibleModule(object):
         to work around limitations, corner cases and ensure selinux context is saved if possible'''
         context = None
         dest_stat = None
-        if os.path.exists(dest):
+        b_src = to_bytes(src)
+        b_dest = to_bytes(dest)
+        if os.path.exists(b_dest):
             try:
-                dest_stat = os.stat(dest)
-                os.chmod(src, dest_stat.st_mode & PERM_BITS)
-                os.chown(src, dest_stat.st_uid, dest_stat.st_gid)
+                dest_stat = os.stat(b_dest)
+                os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
+                os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
             except OSError:
                 e = get_exception()
                 if e.errno != errno.EPERM:
@@ -1917,7 +1920,7 @@ class AnsibleModule(object):
             if self.selinux_enabled():
                 context = self.selinux_default_context(dest)

-        creating = not os.path.exists(dest)
+        creating = not os.path.exists(b_dest)

         try:
             login_name = os.getlogin()
@@ -1933,7 +1936,7 @@ class AnsibleModule(object):

         try:
             # Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
-            os.rename(src, dest)
+            os.rename(b_src, b_dest)
         except (IOError, OSError):
             e = get_exception()
             if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY]:
@@ -1941,34 +1944,44 @@ class AnsibleModule(object):
             # and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
             self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, e))
         else:
-            dest_dir = os.path.dirname(dest)
-            dest_file = os.path.basename(dest)
+            b_dest_dir = os.path.dirname(b_dest)
+            # Converting from bytes so that if py3, it will be
+            # surrogateescaped.  If py2, it wil be a noop.  Converting
+            # from text strings could mangle filenames on py2)
+            native_dest_dir = to_native(b_dest_dir)
+            native_suffix = to_native(os.path.basename(b_dest))
+            native_prefix = '.ansible_tmp'
             try:
-                tmp_dest = tempfile.NamedTemporaryFile(
-                    prefix=".ansible_tmp", dir=dest_dir, suffix=dest_file)
+                tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(
+                    prefix=native_prefix, dir=native_dest_dir, suffix=native_suffix)
             except (OSError, IOError):
                 e = get_exception()
-                self.fail_json(msg='The destination directory (%s) is not writable by the current user. Error was: %s' % (dest_dir, e))
+                self.fail_json(msg='The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), e))
+            b_tmp_dest_name = to_bytes(tmp_dest_name)

-            try: # leaves tmp file behind when sudo and not root
+            try:
+                try:
+                    # close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
+                    os.close(tmp_dest_fd)
+                    # leaves tmp file behind when sudo and not root
                     if switched_user and os.getuid() != 0:
                         # cleanup will happen by 'rm' of tempdir
                         # copy2 will preserve some metadata
-                        shutil.copy2(src, tmp_dest.name)
+                        shutil.copy2(b_src, b_tmp_dest_name)
                     else:
-                        shutil.move(src, tmp_dest.name)
+                        shutil.move(b_src, b_tmp_dest_name)
                     if self.selinux_enabled():
                         self.set_context_if_different(
-                            tmp_dest.name, context, False)
+                            b_tmp_dest_name, context, False)
                     try:
-                        tmp_stat = os.stat(tmp_dest.name)
+                        tmp_stat = os.stat(b_tmp_dest_name)
                         if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
-                            os.chown(tmp_dest.name, dest_stat.st_uid, dest_stat.st_gid)
+                            os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
                     except OSError:
                         e = get_exception()
                         if e.errno != errno.EPERM:
                             raise
-                    os.rename(tmp_dest.name, dest)
+                    os.rename(b_tmp_dest_name, b_dest)
                 except (shutil.Error, OSError, IOError):
                     e = get_exception()
                     # sadly there are some situations where we cannot ensure atomicity, but only if
@@ -1977,8 +1990,8 @@ class AnsibleModule(object):
                     #TODO: issue warning that this is an unsafe operation, but doing it cause user insists
                     try:
                         try:
-                            out_dest = open(dest, 'wb')
-                            in_src = open(src, 'rb')
+                            out_dest = open(b_dest, 'wb')
+                            in_src = open(b_src, 'rb')
                             shutil.copyfileobj(in_src, out_dest)
                         finally:  # assuring closed files in 2.4 compatible way
                             if out_dest:
@@ -1991,17 +2004,17 @@ class AnsibleModule(object):

                 else:
                     self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, e))
-
-            self.cleanup(tmp_dest.name)
+            finally:
+                self.cleanup(b_tmp_dest_name)

         if creating:
             # make sure the file has the correct permissions
             # based on the current value of umask
             umask = os.umask(0)
             os.umask(umask)
-            os.chmod(dest, DEFAULT_PERM & ~umask)
+            os.chmod(b_dest, DEFAULT_PERM & ~umask)
             if switched_user:
-                os.chown(dest, os.getuid(), os.getgid())
+                os.chown(b_dest, os.getuid(), os.getgid())

         if self.selinux_enabled():
             # rename might not preserve context
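The atomic_move() hunks above switch from tempfile.NamedTemporaryFile to tempfile.mkstemp so the temporary file survives being closed and can be renamed over the destination. A simplified, standalone sketch of that pattern (not the full Ansible implementation; `atomic_write` is a name invented for this example):

```python
import os
import tempfile

def atomic_write(dest, data):
    """Write data to dest atomically: write a temp file in the same
    directory, then rename it over the destination."""
    # Create the temp file in the destination directory so the final
    # os.rename() stays on one filesystem and is therefore atomic.
    dest_dir = os.path.dirname(os.path.abspath(dest))
    # mkstemp (unlike NamedTemporaryFile) does not delete on close, so
    # the file survives long enough to be renamed into place.
    fd, tmp_name = tempfile.mkstemp(prefix='.ansible_tmp', dir=dest_dir)
    try:
        # Close the raw fd before further file operations (the diff above
        # does the same to avoid 'text file busy' errors on vboxfs).
        os.close(fd)
        with open(tmp_name, 'wb') as f:
            f.write(data)
        os.rename(tmp_name, dest)   # atomic replace on POSIX
    except OSError:
        os.unlink(tmp_name)
        raise

target = os.path.join(tempfile.mkdtemp(), 'example.txt')
atomic_write(target, b'hello\n')
print(open(target, 'rb').read())  # b'hello\n'
```

Readers either see the old file or the complete new file, never a partially written one, which is the property the diff is preserving while adding bytes-path handling.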
@@ -2024,7 +2037,7 @@ class AnsibleModule(object):
         :kw path_prefix: If given, additional path to find the command in.
             This adds to the PATH environment vairable so helper commands in
             the same directory can also be found
-        :kw cwd: iIf given, working directory to run the command inside
+        :kw cwd: If given, working directory to run the command inside
         :kw use_unsafe_shell: See `args` parameter.  Default False
         :kw prompt_regex: Regex string (not a compiled regex) which can be
             used to detect prompts in the stdout which would otherwise cause
@@ -2039,13 +2052,13 @@ class AnsibleModule(object):
             shell = True
         elif isinstance(args, (binary_type, text_type)) and use_unsafe_shell:
             shell = True
-        elif isinstance(args, string_types):
+        elif isinstance(args, (binary_type, text_type)):
             # On python2.6 and below, shlex has problems with text type
             # On python3, shlex needs a text type.
-            if PY2 and isinstance(args, text_type):
-                args = args.encode('utf-8')
-            elif PY3 and isinstance(args, binary_type):
-                args = args.decode('utf-8', errors='surrogateescape')
+            if PY2:
+                args = to_bytes(args)
+            elif PY3:
+                args = to_text(args, errors='surrogateescape')
             args = shlex.split(args)
         else:
             msg = "Argument 'args' to run_command must be list or string"
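The change above normalizes string `args` with `to_bytes`/`to_text` before `shlex.split` (bytes on Python 2, surrogateescaped text on Python 3). A Python 3-only sketch of the same idea, with a minimal `to_text` stand-in (the real helper lives in `ansible.module_utils._text` and handles more cases):

```python
import shlex

def to_text(obj, errors='strict'):
    # Minimal stand-in for ansible.module_utils._text.to_text:
    # decode bytes to str, pass str through unchanged.
    if isinstance(obj, bytes):
        return obj.decode('utf-8', errors=errors)
    return obj

def split_args(args):
    # shlex.split wants text on Python 3; surrogateescape keeps
    # undecodable bytes representable instead of raising UnicodeDecodeError.
    args = to_text(args, errors='surrogateescape')
    return shlex.split(args)

print(split_args(b"ls -l '/tmp/my dir'"))  # ['ls', '-l', '/tmp/my dir']
```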
@@ -2055,9 +2068,9 @@ class AnsibleModule(object):
         if prompt_regex:
             if isinstance(prompt_regex, text_type):
                 if PY3:
-                    prompt_regex = prompt_regex.encode('utf-8', errors='surrogateescape')
+                    prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
                 elif PY2:
-                    prompt_regex = prompt_regex.encode('utf-8')
+                    prompt_regex = to_bytes(prompt_regex)
         try:
             prompt_re = re.compile(prompt_regex, re.MULTILINE)
         except re.error:
@@ -2065,7 +2078,7 @@ class AnsibleModule(object):

         # expand things like $HOME and ~
         if not shell:
-            args = [ os.path.expandvars(os.path.expanduser(x)) for x in args if x is not None ]
+            args = [ os.path.expanduser(os.path.expandvars(x)) for x in args if x is not None ]

         rc = 0
         msg = None
@@ -231,12 +231,12 @@ class Eapi(EosConfigMixin):
         return response['result']

     def get_config(self, **kwargs):
-        return self.run_commands(['show running-config'], format='text')[0]
+        return self.execute(['show running-config'], format='text')[0]['output']

 Eapi = register_transport('eapi')(Eapi)


-class Cli(CliBase, EosConfigMixin):
+class Cli(EosConfigMixin, CliBase):

     CLI_PROMPTS_RE = [
         re.compile(r"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
@@ -3171,9 +3171,41 @@ class OpenBSDVirtual(Virtual):
         return self.facts

     def get_virtual_facts(self):
+        sysctl_path = self.module.get_bin_path('sysctl')
+
+        # Set empty values as default
+        self.facts['virtualization_type'] = ''
+        self.facts['virtualization_role'] = ''
+
+        if sysctl_path:
+            rc, out, err = self.module.run_command("%s -n hw.product" % sysctl_path)
+            if rc == 0:
+                if re.match('(KVM|Bochs|SmartDC).*', out):
+                    self.facts['virtualization_type'] = 'kvm'
+                    self.facts['virtualization_role'] = 'guest'
+                elif re.match('.*VMware.*', out):
+                    self.facts['virtualization_type'] = 'VMware'
+                    self.facts['virtualization_role'] = 'guest'
+                elif out.rstrip() == 'VirtualBox':
+                    self.facts['virtualization_type'] = 'virtualbox'
+                    self.facts['virtualization_role'] = 'guest'
+                elif out.rstrip() == 'HVM domU':
+                    self.facts['virtualization_type'] = 'xen'
+                    self.facts['virtualization_role'] = 'guest'
+                elif out.rstrip() == 'Parallels':
+                    self.facts['virtualization_type'] = 'parallels'
+                    self.facts['virtualization_role'] = 'guest'
+                elif out.rstrip() == 'RHEV Hypervisor':
+                    self.facts['virtualization_type'] = 'RHEV'
+                    self.facts['virtualization_role'] = 'guest'
+                else:
+                    # Try harder and see if hw.vendor has anything we could use.
+                    rc, out, err = self.module.run_command("%s -n hw.vendor" % sysctl_path)
+                    if rc == 0:
+                        if out.rstrip() == 'QEMU':
+                            self.facts['virtualization_type'] = 'kvm'
+                            self.facts['virtualization_role'] = 'guest'

 class HPUXVirtual(Virtual):
     """
     This is a HP-UX specific subclass of Virtual.  It defines
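The new `get_virtual_facts()` above maps `sysctl hw.product` / `hw.vendor` strings to virtualization facts. The matching logic alone can be exercised in isolation like this (`classify` is a name invented for the example; the detection table is trimmed to cases in the diff):

```python
import re

def classify(product, vendor=''):
    # Mirrors the ordering in the diff: check hw.product first, then
    # fall back to hw.vendor for QEMU.
    if re.match('(KVM|Bochs|SmartDC).*', product):
        return ('kvm', 'guest')
    if re.match('.*VMware.*', product):
        return ('VMware', 'guest')
    if product.rstrip() == 'VirtualBox':
        return ('virtualbox', 'guest')
    if product.rstrip() == 'HVM domU':
        return ('xen', 'guest')
    if vendor.rstrip() == 'QEMU':
        return ('kvm', 'guest')
    # Bare metal or unrecognized: leave both facts empty, as the
    # defaults set at the top of get_virtual_facts() do.
    return ('', '')

print(classify('HVM domU'))  # ('xen', 'guest')
```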
@@ -55,7 +55,8 @@ class Cli(CliBase):

     def connect(self, params, **kwargs):
         super(Cli, self).connect(params, kickstart=False, **kwargs)
-        self.shell.send('terminal length 0')
+        self.shell.send(['terminal length 0',
+                         'terminal exec prompt no-timestamp'])

     ### implementation of netcli.Cli ###
@@ -64,11 +65,14 @@ class Cli(CliBase):
         responses = self.execute(cmds)
         return responses

-    ### immplementation of netcfg.Config ###
+    ### implementation of netcfg.Config ###

     def configure(self, commands, **kwargs):
         cmds = ['configure terminal']
+        if commands[-1] == 'end':
+            commands.pop()
         cmds.extend(to_list(commands))
+        cmds.extend(['commit', 'end'])
         responses = self.execute(cmds)
         return responses[1:]
@@ -81,7 +85,7 @@ class Cli(CliBase):
             cmd += ' %s' % flags
         return self.execute([cmd])[0]

-    def load_config(self, config, replace=False, commit=False, **kwargs):
+    def load_config(self, config, commit=False, replace=False, comment=None):
         commands = ['configure terminal']
         commands.extend(config)
@@ -94,19 +98,22 @@ class Cli(CliBase):
             if commit:
                 if replace:
                     prompt = re.compile(r'\[no\]:\s$')
-                    cmd = Command('commit replace', prompt=prompt,
-                                  response='yes')
+                    commit = 'commit replace'
+                    if comment:
+                        commit += ' comment %s' % comment
+                    cmd = Command(commit, prompt=prompt, response='yes')
                     self.execute([cmd, 'end'])
                 else:
-                    self.execute(['commit', 'end'])
+                    commit = 'commit'
+                    if comment:
+                        commit += ' comment %s' % comment
+                    self.execute([commit, 'end'])
             else:
                 self.execute(['abort'])
         except NetworkError:
             self.execute(['abort'])
             diff = None
             raise
         return diff[0]

     def save_config(self):
         raise NotImplementedError


 Cli = register_transport('cli', default=True)(Cli)
@@ -102,6 +102,7 @@ class Command(object):

         self.command = command
         self.output = output
+        self.command_string = command

         self.prompt = prompt
         self.response = response
@@ -110,7 +111,7 @@ class Command(object):
         self.delay = delay

     def __str__(self):
-        return self.command
+        return self.command_string

 class CommandRunner(object):
@@ -145,7 +146,7 @@ class CommandRunner(object):
             return cmdobj.response
         except KeyError:
             for cmd in self.commands:
-                if str(cmd) == command and cmd.output == output:
+                if cmd.command == command and cmd.output == output:
                     self._cache[(command, output)] = cmd
                     return cmd.response
             raise ValueError("command '%s' not found" % command)
@@ -173,17 +173,15 @@ class Nxapi(object):
         return responses

-    ### end of netcli.Cli ###
-
-    ### implemention of netcfg.Config ###
-
     def configure(self, commands):
         commands = to_list(commands)
         return self.execute(commands, output='config')

-    def get_config(self, **kwargs):
+    def get_config(self, include_defaults=False):
         cmd = 'show running-config'
-        if kwargs.get('include_defaults'):
+        if include_defaults:
             cmd += ' all'
         return self.execute([cmd], output='text')[0]
@@ -251,7 +249,7 @@ class Cli(CliBase):
         except ValueError:
             raise NetworkError(
                 msg='unable to load response from device',
-                response=responses[index]
+                response=responses[index], command=str(cmd)
             )
         return responses
@@ -263,9 +261,9 @@ class Cli(CliBase):
             responses.pop(0)
         return responses

-    def get_config(self, include_defaults=False, **kwargs):
+    def get_config(self, include_defaults=False):
         cmd = 'show running-config'
-        if kwargs.get('include_defaults'):
+        if include_defaults:
             cmd += ' all'
         return self.execute([cmd])[0]
@@ -287,5 +285,5 @@ def prepare_commands(commands):
     jsonify = lambda x: '%s | json' % x
     for cmd in to_list(commands):
         if cmd.output == 'json':
-            cmd.command = jsonify(cmd)
+            cmd.command_string = jsonify(cmd)
         yield cmd
@@ -106,6 +106,11 @@ class Shell(object):
             raise ShellError("unable to resolve host name")
         except AuthenticationException:
             raise ShellError('Unable to authenticate to remote device')
+        except socket.error:
+            exc = get_exception()
+            if exc.errno == 60:
+                raise ShellError('timeout trying to connect to host')
+            raise

         if self.kickstart:
             self.shell.sendall("\n")
lib/ansible/module_utils/univention_umc.py (new file, 292 lines)
@@ -0,0 +1,292 @@
# -*- coding: UTF-8 -*-

# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c) 2016, Adfinis SyGroup AG
# Tobias Rueetschi <tobias.ruetschi@adfinis-sygroup.ch>
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#


"""Univention Corporate Server (UCS) access module.

Provides the following functions for working with an UCS server.

  - ldap_search(filter, base=None, attr=None)
    Search the LDAP via Univention's LDAP wrapper (ULDAP)

  - config_registry()
    Return the UCR registry object

  - base_dn()
    Return the configured Base DN according to the UCR

  - uldap()
    Return a handle to the ULDAP LDAP wrapper

  - umc_module_for_add(module, container_dn, superordinate=None)
    Return a UMC module for creating a new object of the given type

  - umc_module_for_edit(module, object_dn, superordinate=None)
    Return a UMC module for editing an existing object of the given type


Any other module is not part of the "official" API and may change at any time.
"""

import re


__all__ = [
    'ldap_search',
    'config_registry',
    'base_dn',
    'uldap',
    'umc_module_for_add',
    'umc_module_for_edit',
]


_singletons = {}


def ldap_module():
    import ldap as orig_ldap
    return orig_ldap


def _singleton(name, constructor):
    if name in _singletons:
        return _singletons[name]
    _singletons[name] = constructor()
    return _singletons[name]


def config_registry():

    def construct():
        import univention.config_registry
        ucr = univention.config_registry.ConfigRegistry()
        ucr.load()
        return ucr

    return _singleton('config_registry', construct)


def base_dn():
    return config_registry()['ldap/base']


def uldap():
    "Return a configured univention uldap object"

    def construct():
        try:
            secret_file = open('/etc/ldap.secret', 'r')
            bind_dn = 'cn=admin,{}'.format(base_dn())
        except IOError:  # pragma: no cover
            secret_file = open('/etc/machine.secret', 'r')
            bind_dn = config_registry()["ldap/hostdn"]
        pwd_line = secret_file.readline()
        pwd = re.sub('\n', '', pwd_line)

        import univention.admin.uldap
        return univention.admin.uldap.access(
            host = config_registry()['ldap/master'],
            base = base_dn(),
            binddn = bind_dn,
            bindpw = pwd,
            start_tls = 1
        )

    return _singleton('uldap', construct)


def config():
    def construct():
        import univention.admin.config
        return univention.admin.config.config()
    return _singleton('config', construct)


def init_modules():
    def construct():
        import univention.admin.modules
        univention.admin.modules.update()
        return True
    return _singleton('modules_initialized', construct)


def position_base_dn():
    def construct():
        import univention.admin.uldap
        return univention.admin.uldap.position(base_dn())
    return _singleton('position_base_dn', construct)


def ldap_dn_tree_parent(dn, count=1):
    dn_array = dn.split(',')
    dn_array[0:count] = []
    return ','.join(dn_array)


def ldap_search(filter, base=None, attr=None):
    """Replaces uldaps search and uses a generator.
    !! Arguments are not the same."""

    if base is None:
        base = base_dn()
    msgid = uldap().lo.lo.search(
|
||||
base,
|
||||
ldap_module().SCOPE_SUBTREE,
|
||||
filterstr=filter,
|
||||
attrlist=attr
|
||||
)
|
||||
# I used to have a try: finally: here but there seems to be a bug in python
|
||||
# which swallows the KeyboardInterrupt
|
||||
# The abandon now doesn't make too much sense
|
||||
while True:
|
||||
result_type, result_data = uldap().lo.lo.result(msgid, all=0)
|
||||
if not result_data:
|
||||
break
|
||||
if result_type is ldap_module().RES_SEARCH_RESULT: # pragma: no cover
|
||||
break
|
||||
else:
|
||||
if result_type is ldap_module().RES_SEARCH_ENTRY:
|
||||
for res in result_data:
|
||||
yield res
|
||||
uldap().lo.lo.abandon(msgid)
|
||||
|
||||
|
||||
def module_by_name(module_name_):
|
||||
"""Returns an initialized UMC module, identified by the given name.
|
||||
|
||||
The module is a module specification according to the udm commandline.
|
||||
Example values are:
|
||||
* users/user
|
||||
* shares/share
|
||||
* groups/group
|
||||
|
||||
If the module does not exist, a KeyError is raised.
|
||||
|
||||
The modules are cached, so they won't be re-initialized
|
||||
in subsequent calls.
|
||||
"""
|
||||
|
||||
def construct():
|
||||
import univention.admin.modules
|
||||
init_modules()
|
||||
module = univention.admin.modules.get(module_name_)
|
||||
univention.admin.modules.init(uldap(), position_base_dn(), module)
|
||||
return module
|
||||
|
||||
return _singleton('module/%s' % module_name_, construct)
|
||||
|
||||
|
||||
def get_umc_admin_objects():
|
||||
"""Convenience accessor for getting univention.admin.objects.
|
||||
|
||||
This implements delayed importing, so the univention.* modules
|
||||
are not loaded until this function is called.
|
||||
"""
|
||||
import univention.admin
|
||||
return univention.admin.objects
|
||||
|
||||
|
||||
def umc_module_for_add(module, container_dn, superordinate=None):
|
||||
"""Returns an UMC module object prepared for creating a new entry.
|
||||
|
||||
The module is a module specification according to the udm commandline.
|
||||
Example values are:
|
||||
* users/user
|
||||
* shares/share
|
||||
* groups/group
|
||||
|
||||
The container_dn MUST be the dn of the container (not of the object to
|
||||
be created itself!).
|
||||
"""
|
||||
mod = module_by_name(module)
|
||||
|
||||
position = position_base_dn()
|
||||
position.setDn(container_dn)
|
||||
|
||||
# config, ldap objects from common module
|
||||
obj = mod.object(config(), uldap(), position, superordinate=superordinate)
|
||||
obj.open()
|
||||
|
||||
return obj
|
||||
|
||||
|
||||
def umc_module_for_edit(module, object_dn, superordinate=None):
|
||||
"""Returns an UMC module object prepared for editing an existing entry.
|
||||
|
||||
The module is a module specification according to the udm commandline.
|
||||
Example values are:
|
||||
* users/user
|
||||
* shares/share
|
||||
* groups/group
|
||||
|
||||
The object_dn MUST be the dn of the object itself, not the container!
|
||||
"""
|
||||
mod = module_by_name(module)
|
||||
|
||||
objects = get_umc_admin_objects()
|
||||
|
||||
position = position_base_dn()
|
||||
position.setDn(ldap_dn_tree_parent(object_dn))
|
||||
|
||||
obj = objects.get(
|
||||
mod,
|
||||
config(),
|
||||
uldap(),
|
||||
position=position,
|
||||
superordinate=superordinate,
|
||||
dn=object_dn
|
||||
)
|
||||
obj.open()
|
||||
|
||||
return obj
|
||||
|
||||
|
||||
def create_containers_and_parents(container_dn):
|
||||
"""Create a container and if needed the parents containers"""
|
||||
import univention.admin.uexceptions as uexcp
|
||||
assert container_dn.startswith("cn=")
|
||||
try:
|
||||
parent = ldap_dn_tree_parent(container_dn)
|
||||
obj = umc_module_for_add(
|
||||
'container/cn',
|
||||
parent
|
||||
)
|
||||
obj['name'] = container_dn.split(',')[0].split('=')[1]
|
||||
obj['description'] = "container created by import"
|
||||
except uexcp.ldapError:
|
||||
create_containers_and_parents(parent)
|
||||
obj = umc_module_for_add(
|
||||
'container/cn',
|
||||
parent
|
||||
)
|
||||
obj['name'] = container_dn.split(',')[0].split('=')[1]
|
||||
obj['description'] = "container created by import"
|
|
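The module above caches every expensive handle (UCR registry, uldap connection, UMC modules) through its `_singleton` helper. A minimal standalone sketch of that memoization pattern (the `make_registry` stand-in is hypothetical, not the real UCR constructor):

```python
# Sketch of the _singleton pattern used throughout the module:
# each named constructor runs at most once; later calls return the cache.
_singletons = {}


def _singleton(name, constructor):
    # Return the cached object if present; otherwise build and cache it.
    if name in _singletons:
        return _singletons[name]
    _singletons[name] = constructor()
    return _singletons[name]


calls = []


def make_registry():
    # Hypothetical stand-in for the expensive UCR/uldap constructors.
    calls.append(1)
    return {'ldap/base': 'dc=example,dc=com'}


first = _singleton('config_registry', make_registry)
second = _singleton('config_registry', make_registry)
assert first is second   # same cached object on every call
assert len(calls) == 1   # the constructor ran only once
```

This is why the module can freely call `config_registry()` and `uldap()` from many helpers without reconnecting each time.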
@@ -1 +1 @@
-Subproject commit ef84dbbddd5d64e0860bd1f198dbd71929061d01
+Subproject commit 5310bab12f6013195ac0e770472d593552271b11
@@ -1 +1 @@
-Subproject commit f29efb56264a9ad95b97765e367ef5b7915ab877
+Subproject commit 2ef4a34eee091449d2a22312e3e15171f8c6d54c
@@ -264,7 +264,8 @@ class ModuleArgsParser:
         if 'action' in self._task_ds:
             # an old school 'action' statement
             thing = self._task_ds['action']
-            action, args = self._normalize_parameters(thing, additional_args=additional_args)
+            action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args)
+

         # local_action
         if 'local_action' in self._task_ds:

@@ -273,19 +274,20 @@ class ModuleArgsParser:
                 raise AnsibleParserError("action and local_action are mutually exclusive", obj=self._task_ds)
             thing = self._task_ds.get('local_action', '')
             delegate_to = 'localhost'
-            action, args = self._normalize_parameters(thing, additional_args=additional_args)
+            action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args)

         # module: <stuff> is the more new-style invocation

         # walk the input dictionary to see we recognize a module name
         for (item, value) in iteritems(self._task_ds):
-            if item in module_loader or item == 'meta' or item == 'include':
+            if item in module_loader or item in ['meta', 'include', 'include_role']:
                 # finding more than one module name is a problem
                 if action is not None:
                     raise AnsibleParserError("conflicting action statements", obj=self._task_ds)
                 action = item
                 thing = value
-                action, args = self._normalize_parameters(value, action=action, additional_args=additional_args)
+                action, args = self._normalize_parameters(thing, action=action, additional_args=additional_args)

         # if we didn't see any module in the task at all, it's not a task really
         if action is None:
@@ -22,8 +22,7 @@ import os

 from ansible import constants as C
 from ansible.compat.six import string_types
-from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound
 from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleSequence
+from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleError

 try:
     from __main__ import display

@@ -81,6 +80,7 @@ def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_h
     from ansible.playbook.handler import Handler
     from ansible.playbook.task import Task
     from ansible.playbook.task_include import TaskInclude
+    from ansible.playbook.role_include import IncludeRole
     from ansible.playbook.handler_task_include import HandlerTaskInclude
     from ansible.template import Templar

@@ -172,7 +172,7 @@ def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_h
             if not found:
                 try:
                     include_target = templar.template(t.args['_raw_params'])
-                except AnsibleUndefinedVariable as e:
+                except AnsibleUndefinedVariable:
                     raise AnsibleParserError(
                         "Error when evaluating variable in include name: %s.\n\n" \
                         "When using static includes, ensure that any variables used in their names are defined in vars/vars_files\n" \

@@ -191,14 +191,14 @@ def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_h
                     if data is None:
                         return []
                     elif not isinstance(data, list):
-                        raise AnsibleError("included task files must contain a list of tasks", obj=data)
+                        raise AnsibleParserError("included task files must contain a list of tasks", obj=data)

                     # since we can't send callbacks here, we display a message directly in
                     # the same fashion used by the on_include callback. We also do it here,
                     # because the recursive nature of helper methods means we may be loading
                     # nested includes, and we want the include order printed correctly
                     display.display("statically included: %s" % include_file, color=C.COLOR_SKIP)
-                except AnsibleFileNotFound as e:
+                except AnsibleFileNotFound:
                     if t.static or \
                        C.DEFAULT_TASK_INCLUDES_STATIC or \
                        C.DEFAULT_HANDLER_INCLUDES_STATIC and use_handlers:

@@ -258,11 +258,24 @@ def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_h
                     task_list.extend(included_blocks)
                 else:
                     task_list.append(t)

+            elif 'include_role' in task_ds:
+                task_list.extend(
+                    IncludeRole.load(
+                        task_ds,
+                        block=block,
+                        role=role,
+                        task_include=None,
+                        variable_manager=variable_manager,
+                        loader=loader
+                    )
+                )
             else:
                 if use_handlers:
                     t = Handler.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
                 else:
                     t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)

                 task_list.append(t)

     return task_list
@@ -66,7 +66,7 @@ class Role(Base, Become, Conditional, Taggable):
     _delegate_to = FieldAttribute(isa='string')
     _delegate_facts = FieldAttribute(isa='bool', default=False)

-    def __init__(self, play=None):
+    def __init__(self, play=None, from_files=None):
         self._role_name = None
         self._role_path = None
         self._role_params = dict()

@@ -83,6 +83,10 @@ class Role(Base, Become, Conditional, Taggable):
         self._had_task_run = dict()
         self._completed = dict()

+        if from_files is None:
+            from_files = {}
+        self._from_files = from_files
+
         super(Role, self).__init__()

     def __repr__(self):

@@ -92,7 +96,10 @@ class Role(Base, Become, Conditional, Taggable):
         return self._role_name

     @staticmethod
-    def load(role_include, play, parent_role=None):
+    def load(role_include, play, parent_role=None, from_files=None):
+
+        if from_files is None:
+            from_files = {}
         try:
             # The ROLE_CACHE is a dictionary of role names, with each entry
             # containing another dictionary corresponding to a set of parameters

@@ -104,6 +111,10 @@ class Role(Base, Become, Conditional, Taggable):
                 params['when'] = role_include.when
             if role_include.tags is not None:
                 params['tags'] = role_include.tags
+            if from_files is not None:
+                params['from_files'] = from_files
+            if role_include.vars:
+                params['vars'] = role_include.vars
             hashed_params = hash_params(params)
             if role_include.role in play.ROLE_CACHE:
                 for (entry, role_obj) in iteritems(play.ROLE_CACHE[role_include.role]):

@@ -112,7 +123,7 @@ class Role(Base, Become, Conditional, Taggable):
                         role_obj.add_parent(parent_role)
                         return role_obj

-        r = Role(play=play)
+        r = Role(play=play, from_files=from_files)
         r._load_role_data(role_include, parent_role=parent_role)

         if role_include.role not in play.ROLE_CACHE:

@@ -163,7 +174,7 @@ class Role(Base, Become, Conditional, Taggable):
         else:
             self._metadata = RoleMetadata()

-        task_data = self._load_role_yaml('tasks')
+        task_data = self._load_role_yaml('tasks', main=self._from_files.get('tasks'))
         if task_data:
             try:
                 self._task_blocks = load_list_of_blocks(task_data, play=self._play, role=self, loader=self._loader, variable_manager=self._variable_manager)

@@ -178,35 +189,48 @@ class Role(Base, Become, Conditional, Taggable):
                 raise AnsibleParserError("The handlers/main.yml file for role '%s' must contain a list of tasks" % self._role_name , obj=handler_data)

         # vars and default vars are regular dictionaries
-        self._role_vars = self._load_role_yaml('vars')
+        self._role_vars = self._load_role_yaml('vars', main=self._from_files.get('vars'))
         if self._role_vars is None:
             self._role_vars = dict()
         elif not isinstance(self._role_vars, dict):
             raise AnsibleParserError("The vars/main.yml file for role '%s' must contain a dictionary of variables" % self._role_name)

-        self._default_vars = self._load_role_yaml('defaults')
+        self._default_vars = self._load_role_yaml('defaults', main=self._from_files.get('defaults'))
         if self._default_vars is None:
             self._default_vars = dict()
         elif not isinstance(self._default_vars, dict):
             raise AnsibleParserError("The defaults/main.yml file for role '%s' must contain a dictionary of variables" % self._role_name)

-    def _load_role_yaml(self, subdir):
+    def _load_role_yaml(self, subdir, main=None):
         file_path = os.path.join(self._role_path, subdir)
         if self._loader.path_exists(file_path) and self._loader.is_directory(file_path):
-            main_file = self._resolve_main(file_path)
+            main_file = self._resolve_main(file_path, main)
             if self._loader.path_exists(main_file):
                 return self._loader.load_from_file(main_file)
         return None

-    def _resolve_main(self, basepath):
+    def _resolve_main(self, basepath, main=None):
         ''' flexibly handle variations in main filenames '''
+
+        post = False
+        # allow override if set, otherwise use default
+        if main is None:
+            main = 'main'
+            post = True
+
+        bare_main = os.path.join(basepath, main)
+
         possible_mains = (
-            os.path.join(basepath, 'main.yml'),
-            os.path.join(basepath, 'main.yaml'),
-            os.path.join(basepath, 'main.json'),
-            os.path.join(basepath, 'main'),
+            os.path.join(basepath, '%s.yml' % main),
+            os.path.join(basepath, '%s.yaml' % main),
+            os.path.join(basepath, '%s.json' % main),
         )
+
+        if post:
+            possible_mains = possible_mains + (bare_main,)
+        else:
+            possible_mains = (bare_main,) + possible_mains
+
         if sum([self._loader.is_file(x) for x in possible_mains]) > 1:
             raise AnsibleError("found multiple main files at %s, only one allowed" % (basepath))
         else:

@@ -274,6 +298,7 @@ class Role(Base, Become, Conditional, Taggable):
         for dep in self.get_all_dependencies():
             all_vars = combine_vars(all_vars, dep.get_vars(include_params=include_params))

+        all_vars = combine_vars(all_vars, self.vars)
         all_vars = combine_vars(all_vars, self._role_vars)
         if include_params:
             all_vars = combine_vars(all_vars, self.get_role_params(dep_chain=dep_chain))
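The reworked `_resolve_main` above orders its candidates so that an explicitly requested file (from `tasks_from`/`vars_from`/`defaults_from`) is tried first, while the bare `main` path is tried last in the default case. A standalone sketch of just that ordering logic (the function name and paths are hypothetical, and the loader's existence checks are omitted):

```python
import os.path


def resolve_main_candidates(basepath, main=None):
    """Mirror the candidate ordering in the diff above: with no override,
    the bare 'main' path is appended last; with an explicit override, the
    bare named file is tried first."""
    post = False
    # allow override if set, otherwise use default
    if main is None:
        main = 'main'
        post = True

    bare_main = os.path.join(basepath, main)

    possible_mains = (
        os.path.join(basepath, '%s.yml' % main),
        os.path.join(basepath, '%s.yaml' % main),
        os.path.join(basepath, '%s.json' % main),
    )

    if post:
        return possible_mains + (bare_main,)
    return (bare_main,) + possible_mains


# default case: extension-less 'main' comes last
assert resolve_main_candidates('/r/tasks')[-1] == '/r/tasks/main'
# explicit override: the named file is tried first, then its variants
assert resolve_main_candidates('/r/tasks', 'other')[0] == '/r/tasks/other'
```

In the real method the first existing candidate wins, and finding more than one existing main file raises an error.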
lib/ansible/playbook/role_include.py (new file)
@@ -0,0 +1,84 @@
# Copyright (c) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from os.path import basename

from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.task import Task
from ansible.playbook.role import Role
from ansible.playbook.role.include import RoleInclude

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()

__all__ = ['IncludeRole']


class IncludeRole(Task):

    """
    A Role include is derived from a regular role to handle the special
    circumstances related to the `- include_role: ...`
    """

    # =================================================================================
    # ATTRIBUTES

    _name = FieldAttribute(isa='string', default=None)
    _tasks_from = FieldAttribute(isa='string', default=None)

    # these should not be changeable?
    _static = FieldAttribute(isa='bool', default=False)
    _private = FieldAttribute(isa='bool', default=True)

    @staticmethod
    def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):

        r = IncludeRole().load_data(data, variable_manager=variable_manager, loader=loader)
        args = r.preprocess_data(data).get('args', dict())

        ri = RoleInclude.load(args.get('name'), play=block._play, variable_manager=variable_manager, loader=loader)
        ri.vars.update(r.vars)

        # build options for roles
        from_files = {}
        for key in ['tasks', 'vars', 'defaults']:
            from_key = key + '_from'
            if args.get(from_key):
                from_files[key] = basename(args.get(from_key))

        #build role
        actual_role = Role.load(ri, block._play, parent_role=role, from_files=from_files)

        # compile role
        blocks = actual_role.compile(play=block._play)

        # set parent to ensure proper inheritance
        for b in blocks:
            b._parent = block

        # updated available handlers in play
        block._play.handlers = block._play.handlers + actual_role.get_handler_blocks(play=block._play)

        return blocks
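`IncludeRole.load` above collects the `*_from` arguments into the `from_files` mapping that `Role.load` then uses to pick alternate entry-point files. The loop can be sketched in isolation (the wrapper function name and the sample args are hypothetical):

```python
from os.path import basename


def build_from_files(args):
    # Mirrors the loop in IncludeRole.load: for each of tasks/vars/defaults,
    # a '<key>_from' argument selects an alternate file; only the basename
    # is kept, since the role's own subdirectory supplies the path.
    from_files = {}
    for key in ['tasks', 'vars', 'defaults']:
        from_key = key + '_from'
        if args.get(from_key):
            from_files[key] = basename(args.get(from_key))
    return from_files


# a tasks_from path is reduced to its basename
assert build_from_files({'name': 'myrole', 'tasks_from': 'setup/install.yml'}) == {'tasks': 'install.yml'}
# with no overrides, the mapping stays empty and main.* defaults apply
assert build_from_files({'name': 'myrole'}) == {}
```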
@@ -357,16 +357,15 @@ class PluginLoader:
             return obj

     def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
-        searched_msg = 'Searching for plugin type %s named \'%s\' in paths: %s' % (class_name, name, self.format_paths(searched_paths))
-        loading_msg = 'Loading plugin type %s named \'%s\' from %s' % (class_name, name, path)
+        msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
+
+        if len(searched_paths) > 1:
+            msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))

         if found_in_cache or class_only:
-            extra_msg = 'found_in_cache=%s, class_only=%s' % (found_in_cache, class_only)
-            display.debug('%s %s' % (searched_msg, extra_msg))
-            display.debug('%s %s' % (loading_msg, extra_msg))
-        else:
-            display.vvvv(searched_msg)
-            display.vvv(loading_msg)
+            msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)

+        display.debug(msg)

     def all(self, *args, **kwargs):
         ''' instantiates all plugins with the same arguments '''
@@ -64,7 +64,8 @@ class ActionModule(ActionBase):
         remote_checksum = None
         if not self._play_context.become:
             # calculate checksum for the remote file, don't bother if using become as slurp will be used
-            remote_checksum = self._remote_checksum(source, all_vars=task_vars)
+            # Force remote_checksum to follow symlinks because fetch always follows symlinks
+            remote_checksum = self._remote_checksum(source, all_vars=task_vars, follow=True)

         # use slurp if permissions are lacking or privilege escalation is needed
         remote_data = None
@@ -254,7 +254,7 @@ class ConnectionBase(with_metaclass(ABCMeta, object)):
             b_prompt = to_bytes(self._play_context.prompt)
             return b_output.startswith(b_prompt)
         else:
-            return self._play_context.prompt(output)
+            return self._play_context.prompt(b_output)

     def check_incorrect_password(self, b_output):
         b_incorrect_password = to_bytes(gettext.dgettext(self._play_context.become_method, C.BECOME_ERROR_STRINGS[self._play_context.become_method]))
@@ -23,6 +23,8 @@ import grp
 import stat

 from ansible.plugins.lookup import LookupBase
+from ansible.utils.unicode import to_str

 from __main__ import display
 warning = display.warning

@@ -33,25 +35,15 @@ try:
 except ImportError:
     pass

-def _to_filesystem_str(path):
-    '''Returns filesystem path as a str, if it wasn't already.
-
-    Used in selinux interactions because it cannot accept unicode
-    instances, and specifying complex args in a playbook leaves
-    you with unicode instances. This method currently assumes
-    that your filesystem encoding is UTF-8.
-
-    '''
-    if isinstance(path, unicode):
-        path = path.encode("utf-8")
-    return path
-
 # If selinux fails to find a default, return an array of None
 def selinux_context(path):
     context = [None, None, None, None]
     if HAVE_SELINUX and selinux.is_selinux_enabled():
         try:
-            ret = selinux.lgetfilecon_raw(_to_filesystem_str(path))
+            # note: the selinux module uses byte strings on python2 and text
+            # strings on python3
+            ret = selinux.lgetfilecon_raw(to_str(path))
         except OSError:
             return context
     if ret[0] != -1:

@@ -60,6 +52,7 @@ def selinux_context(path):
         context = ret[1].split(':', 3)
     return context

+
 def file_props(root, path):
     ''' Returns dictionary with file properties, or return None on failure '''
     abspath = os.path.join(root, path)

@@ -94,7 +87,7 @@ def file_props(root, path):
         ret['group'] = grp.getgrgid(st.st_gid).gr_name
     except KeyError:
         ret['group'] = st.st_gid
-    ret['mode'] = str(oct(stat.S_IMODE(st.st_mode)))
+    ret['mode'] = '0%03o' % (stat.S_IMODE(st.st_mode))
     ret['size'] = st.st_size
     ret['mtime'] = st.st_mtime
     ret['ctime'] = st.st_ctime
@@ -40,6 +40,7 @@ from ansible.module_utils.facts import Facts
 from ansible.playbook.helpers import load_list_of_blocks
 from ansible.playbook.included_file import IncludedFile
 from ansible.playbook.task_include import TaskInclude
+from ansible.playbook.role_include import IncludeRole
 from ansible.plugins import action_loader, connection_loader, filter_loader, lookup_loader, module_loader, test_loader
 from ansible.template import Templar
 from ansible.utils.unicode import to_unicode

@@ -258,7 +259,7 @@ class StrategyBase:

         def parent_handler_match(target_handler, handler_name):
             if target_handler:
-                if isinstance(target_handler, TaskInclude):
+                if isinstance(target_handler, (TaskInclude, IncludeRole)):
                     try:
                         handler_vars = self._variable_manager.get_vars(loader=self._loader, play=iterator._play, task=target_handler)
                         templar = Templar(loader=self._loader, variables=handler_vars)

@@ -477,7 +478,7 @@ class StrategyBase:

         # If this is a role task, mark the parent role as being run (if
         # the task was ok or failed, but not skipped or unreachable)
-        if original_task._role is not None and role_ran:
+        if original_task._role is not None and role_ran and original_task.action != 'include_role':
             # lookup the role in the ROLE_CACHE to make sure we're dealing
             # with the correct object and mark it as executed
             for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role._role_name]):
@@ -26,7 +26,7 @@ from ansible.playbook.included_file import IncludedFile
 from ansible.plugins import action_loader
 from ansible.plugins.strategy import StrategyBase
 from ansible.template import Templar
-from ansible.compat.six import text_type
+from ansible.utils.unicode import to_unicode

 try:
     from __main__ import display

@@ -109,7 +109,7 @@ class StrategyModule(StrategyBase):
                 display.debug("done getting variables")

                 try:
-                    task.name = text_type(templar.template(task.name, fail_on_undefined=False))
+                    task.name = to_unicode(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
                     display.debug("done templating")
                 except:
                     # just ignore any errors during task name templating,
@@ -19,7 +19,7 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

-from ansible.compat.six import iteritems, text_type
+from ansible.compat.six import iteritems

 from ansible.errors import AnsibleError
 from ansible.executor.play_iterator import PlayIterator

@@ -238,7 +238,7 @@ class StrategyModule(StrategyBase):
                     saved_name = task.name
                     display.debug("done copying, going to template now")
                     try:
-                        task.name = text_type(templar.template(task.name, fail_on_undefined=False))
+                        task.name = to_unicode(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
                         display.debug("done templating")
                     except:
                         # just ignore any errors during task name templating,
@@ -235,7 +235,7 @@ class VariableManager:
         # if we have a task in this context, and that task has a role, make
         # sure it sees its defaults above any other roles, as we previously
         # (v1) made sure each task had a copy of its roles default vars
-        if task and task._role is not None:
+        if task and task._role is not None and (play or task.action == 'include_role'):
             all_vars = combine_vars(all_vars, task._role.get_default_vars(dep_chain=task.get_dep_chain()))

         if host:
@@ -51,8 +51,6 @@ matrix:
       python: 2.6
     - env: TEST=sanity INSTALL_DEPS=1 TOXENV=py27
       python: 2.7
-    - env: TEST=sanity INSTALL_DEPS=1 TOXENV=py34
-      python: 3.4
     - env: TEST=sanity INSTALL_DEPS=1 TOXENV=py35
       python: 3.5
@@ -68,7 +68,7 @@ def _check_mode_changed_to_0660(self, mode):
         with patch('os.lstat', side_effect=[self.mock_stat1, self.mock_stat2, self.mock_stat2]) as m_lstat:
             with patch('os.lchmod', return_value=None, create=True) as m_lchmod:
                 self.assertEqual(self.am.set_mode_if_different('/path/to/file', mode, False), True)
-                m_lchmod.assert_called_with('/path/to/file', 0o660)
+                m_lchmod.assert_called_with(b'/path/to/file', 0o660)

     def _check_mode_unchanged_when_already_0660(self, mode):
         # Note: This is for checking that all the different ways of specifying
@@ -741,7 +741,7 @@ class TestModuleUtilsBasic(ModuleTestCase):
         with patch('os.lchown', side_effect=OSError) as m:
             self.assertRaises(SystemExit, am.set_group_if_different, '/path/to/file', 'root', False)

-    @patch('tempfile.NamedTemporaryFile')
+    @patch('tempfile.mkstemp')
     @patch('os.umask')
     @patch('shutil.copyfileobj')
     @patch('shutil.move')

@@ -755,8 +755,10 @@ class TestModuleUtilsBasic(ModuleTestCase):
     @patch('os.chmod')
     @patch('os.stat')
     @patch('os.path.exists')
+    @patch('os.close')
     def test_module_utils_basic_ansible_module_atomic_move(
             self,
+            _os_close,
             _os_path_exists,
             _os_stat,
             _os_chmod,

@@ -770,7 +772,7 @@ class TestModuleUtilsBasic(ModuleTestCase):
             _shutil_move,
             _shutil_copyfileobj,
             _os_umask,
-            _tempfile_NamedTemporaryFile,
+            _tempfile_mkstemp,
             ):

         from ansible.module_utils import basic

@@ -802,8 +804,8 @@ class TestModuleUtilsBasic(ModuleTestCase):
         _os_chown.reset_mock()
         am.set_context_if_different.reset_mock()
         am.atomic_move('/path/to/src', '/path/to/dest')
-        _os_rename.assert_called_with('/path/to/src', '/path/to/dest')
-        self.assertEqual(_os_chmod.call_args_list, [call('/path/to/dest', basic.DEFAULT_PERM & ~18)])
+        _os_rename.assert_called_with(b'/path/to/src', b'/path/to/dest')
+        self.assertEqual(_os_chmod.call_args_list, [call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)])

         # same as above, except selinux_enabled
         _os_path_exists.side_effect = [False, False]

@@ -820,8 +822,8 @@ class TestModuleUtilsBasic(ModuleTestCase):
         am.set_context_if_different.reset_mock()
         am.selinux_default_context.reset_mock()
         am.atomic_move('/path/to/src', '/path/to/dest')
-        _os_rename.assert_called_with('/path/to/src', '/path/to/dest')
-        self.assertEqual(_os_chmod.call_args_list, [call('/path/to/dest', basic.DEFAULT_PERM & ~18)])
+        _os_rename.assert_called_with(b'/path/to/src', b'/path/to/dest')
+        self.assertEqual(_os_chmod.call_args_list, [call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)])
         self.assertEqual(am.selinux_default_context.call_args_list, [call('/path/to/dest')])
         self.assertEqual(am.set_context_if_different.call_args_list, [call('/path/to/dest', mock_context, False)])

@@ -844,7 +846,7 @@ class TestModuleUtilsBasic(ModuleTestCase):
         _os_chown.reset_mock()
         am.set_context_if_different.reset_mock()
         am.atomic_move('/path/to/src', '/path/to/dest')
-        _os_rename.assert_called_with('/path/to/src', '/path/to/dest')
+        _os_rename.assert_called_with(b'/path/to/src', b'/path/to/dest')

         # dest missing, selinux enabled
         _os_path_exists.side_effect = [True, True]

@@ -866,7 +868,7 @@ class TestModuleUtilsBasic(ModuleTestCase):
         am.set_context_if_different.reset_mock()
         am.selinux_default_context.reset_mock()
         am.atomic_move('/path/to/src', '/path/to/dest')
-        _os_rename.assert_called_with('/path/to/src', '/path/to/dest')
+        _os_rename.assert_called_with(b'/path/to/src', b'/path/to/dest')
         self.assertEqual(am.selinux_context.call_args_list, [call('/path/to/dest')])
         self.assertEqual(am.set_context_if_different.call_args_list, [call('/path/to/dest', mock_context, False)])

@@ -903,20 +905,21 @@ class TestModuleUtilsBasic(ModuleTestCase):
         self.assertRaises(SystemExit, am.atomic_move, '/path/to/src', '/path/to/dest')

         # next we test with EPERM so it continues to the alternate code for moving
-        # test with NamedTemporaryFile raising an error first
+        # test with mkstemp raising an error first
         _os_path_exists.side_effect = [False, False]
         _os_getlogin.return_value = 'root'
         _os_getuid.return_value = 0
+        _os_close.return_value = None
         _pwd_getpwuid.return_value = ('root', '', 0, 0, '', '', '')
         _os_umask.side_effect = [18, 0]
         _os_rename.side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
-        _tempfile_NamedTemporaryFile.return_value = None
-        _tempfile_NamedTemporaryFile.side_effect = OSError()
+        _tempfile_mkstemp.return_value = None
+        _tempfile_mkstemp.side_effect = OSError()
         am.selinux_enabled.return_value = False
         self.assertRaises(SystemExit, am.atomic_move, '/path/to/src', '/path/to/dest')

         # then test with it creating a temp file
-        _os_path_exists.side_effect = [False, False]
+        _os_path_exists.side_effect = [False, False, False]
         _os_getlogin.return_value = 'root'
         _os_getuid.return_value = 0
         _pwd_getpwuid.return_value = ('root', '', 0, 0, '', '', '')
|
||||
|
@ -927,23 +930,20 @@ class TestModuleUtilsBasic(ModuleTestCase):
|
|||
mock_stat3 = MagicMock()
|
||||
_os_stat.return_value = [mock_stat1, mock_stat2, mock_stat3]
|
||||
_os_stat.side_effect = None
|
||||
mock_tempfile = MagicMock()
|
||||
mock_tempfile.name = '/path/to/tempfile'
|
||||
_tempfile_NamedTemporaryFile.return_value = mock_tempfile
|
||||
_tempfile_NamedTemporaryFile.side_effect = None
|
||||
_tempfile_mkstemp.return_value = (None, '/path/to/tempfile')
|
||||
_tempfile_mkstemp.side_effect = None
|
||||
am.selinux_enabled.return_value = False
|
||||
# FIXME: we don't assert anything here yet
|
||||
am.atomic_move('/path/to/src', '/path/to/dest')
|
||||
|
||||
# same as above, but with selinux enabled
|
||||
_os_path_exists.side_effect = [False, False]
|
||||
_os_path_exists.side_effect = [False, False, False]
|
||||
_os_getlogin.return_value = 'root'
|
||||
_os_getuid.return_value = 0
|
||||
_pwd_getpwuid.return_value = ('root', '', 0, 0, '', '', '')
|
||||
_os_umask.side_effect = [18, 0]
|
||||
_os_rename.side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
|
||||
mock_tempfile = MagicMock()
|
||||
_tempfile_NamedTemporaryFile.return_value = mock_tempfile
|
||||
_tempfile_mkstemp.return_value = (None, None)
|
||||
mock_context = MagicMock()
|
||||
am.selinux_default_context.return_value = mock_context
|
||||
am.selinux_enabled.return_value = True
|
||||
|
|
|
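The hunks above replace the mocked `tempfile.NamedTemporaryFile` with `tempfile.mkstemp` and now assert that `os.rename` receives byte-string paths. As a rough illustration of the behavior under test — a hypothetical sketch, not Ansible's actual `atomic_move` implementation — the EPERM fallback pattern these mocks exercise looks roughly like this:

```python
# Hypothetical sketch of the rename-with-fallback pattern exercised by the
# tests above; names and structure are illustrative, not Ansible's real code.
import contextlib
import errno
import os
import shutil
import tempfile


def atomic_move_sketch(src, dest):
    """Move src to dest, copying via a mkstemp() temp file on EPERM/EXDEV."""
    try:
        os.rename(src, dest)
    except OSError as e:
        if e.errno not in (errno.EPERM, errno.EXDEV):
            raise
        # mkstemp() returns a plain (fd, path) pair, which is simpler to mock
        # than a NamedTemporaryFile object and is not deleted automatically.
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest))
        try:
            os.close(fd)
            with open(src, 'rb') as f_in, open(tmp_path, 'wb') as f_out:
                shutil.copyfileobj(f_in, f_out)
            # the temp file lives in dest's directory, so this rename
            # stays on one filesystem and succeeds
            os.rename(tmp_path, dest)
            os.unlink(src)
        except Exception:
            with contextlib.suppress(OSError):
                os.unlink(tmp_path)
            raise
```

Mocking `tempfile.mkstemp`, `os.close`, and `shutil.copyfileobj` (as the patched names in the hunks do) lets the tests drive both the direct-rename path and this fallback without touching the real filesystem.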
@ -9,7 +9,6 @@ test_get_url
test_git
test_hg
test_iterators
test_lineinfile
test_lookups
test_mysql_db
test_mysql_user

@ -1,11 +1,8 @@
cryptography
jinja2
junit-xml
ndg-httpsclient
pyasn1
pyopenssl
pyyaml
requests
setuptools
pywinrm
xmltodict

@ -86,6 +86,7 @@ if [ ${start_instance} ]; then
start --id "${instance_id}" "${test_auth}" "${test_platform}" "${test_version}" ${args}
fi

pip install "${source_root}" --upgrade
pip install -r "${source_root}/test/utils/shippable/remote-requirements.txt" --upgrade
pip list

6
tox.ini
@ -1,8 +1,5 @@
[tox]
envlist = py26,py27,py34,py35

[testenv:py34]
deps = -r{toxinidir}/test/utils/tox/requirements-py3.txt
envlist = py26,py27,py35

[testenv:py35]
deps = -r{toxinidir}/test/utils/tox/requirements-py3.txt

@ -14,7 +11,6 @@ commands =
python --version
py26: python -m compileall -fq -x 'test/samples|contrib/inventory/vagrant.py' lib test contrib
py27: python -m compileall -fq -x 'test/samples' lib test contrib
py34: python -m compileall -fq -x 'test/samples|lib/ansible/modules' lib test contrib
py35: python -m compileall -fq -x 'test/samples|lib/ansible/modules' lib test contrib
make tests
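Taken together, the two tox.ini hunks above drop the py34 environment entirely and keep py35 as the only Python 3 target. Reconstructed from just these hunks (the real file may contain additional settings outside them, and the `[testenv]` section header for `commands` is an assumption), the resulting configuration reads along these lines:

```ini
[tox]
envlist = py26,py27,py35

[testenv:py35]
deps = -r{toxinidir}/test/utils/tox/requirements-py3.txt

[testenv]
commands =
    python --version
    py26: python -m compileall -fq -x 'test/samples|contrib/inventory/vagrant.py' lib test contrib
    py27: python -m compileall -fq -x 'test/samples' lib test contrib
    py35: python -m compileall -fq -x 'test/samples|lib/ansible/modules' lib test contrib
    make tests
```

The `py26:`/`py27:`/`py35:` prefixes are tox factor conditionals: each `compileall` line runs only in the matching environment, so dropping the `py34:` line is all that is needed once py34 leaves `envlist`.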