mirror of
https://github.com/ansible-collections/community.general.git
synced 2025-04-25 20:01:25 -07:00
257 lines
13 KiB
Python
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
    strategy: free
    short_description: Executes tasks without waiting for all hosts
    description:
        - Task execution is as fast as possible per batch as defined by C(serial) (default all).
          Ansible will not wait for other hosts to finish the current task before queuing more tasks for other hosts.
          All hosts are still attempted for the current task, but it prevents blocking new tasks for hosts that have already finished.
        - With the free strategy, unlike the default linear strategy, a host that is slow or stuck on a specific task
          won't hold up the rest of the hosts and tasks.
    version_added: "2.0"
    author: Ansible Core Team
'''
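The DOCUMENTATION block above describes the strategy from the user's side. As an illustration (not part of this file; command paths are hypothetical), a play opts into this strategy with the play-level `strategy` keyword:

```yaml
# Each host advances through its own copy of the task list independently,
# so a host stuck on "long running step" does not delay the others.
- hosts: all
  strategy: free
  tasks:
    - name: long running step
      command: /usr/bin/do_work   # hypothetical command
    - name: follow-up step
      command: /usr/bin/report    # hypothetical command
```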

import time

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.playbook.included_file import IncludedFile
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.module_utils._text import to_text
from ansible.utils.display import Display

display = Display()


class StrategyModule(StrategyBase):

    def _filter_notified_hosts(self, notified_hosts):
        '''
        Filter notified hosts according to the strategy
        '''

        # We act only on hosts that are ready to flush handlers
        return [host for host in notified_hosts
                if host in self._flushed_hosts and self._flushed_hosts[host]]

    def __init__(self, tqm):
        super(StrategyModule, self).__init__(tqm)
        self._host_pinned = False

    def run(self, iterator, play_context):
        '''
        The "free" strategy is a bit more complex, in that it allows tasks to
        be sent to hosts as quickly as they can be processed. This means that
        some hosts may finish very quickly if run tasks result in little or no
        work being done versus other systems.

        The algorithm used here also tries to be more "fair" when iterating
        through hosts by remembering the last host in the list to be given a task
        and starting the search from there as opposed to the top of the hosts
        list again, which would end up favoring hosts near the beginning of the
        list.
        '''

        # the last host to be given a task
        last_host = 0

        result = self._tqm.RUN_OK

        # start with all workers being counted as being free
        workers_free = len(self._workers)

        work_to_do = True
        while work_to_do and not self._tqm._terminated:

            hosts_left = self.get_hosts_left(iterator)

            if len(hosts_left) == 0:
                self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
                result = False
                break

            work_to_do = False        # assume we have no more work to do
            starting_host = last_host  # save current position so we know when we've looped back around and need to break

            # try and find an unblocked host with a task to run
            host_results = []
            while True:
                host = hosts_left[last_host]
                display.debug("next free host: %s" % host)
                host_name = host.get_name()

                # peek at the next task for the host, to see if there's
                # anything to do for this host
                (state, task) = iterator.get_next_task_for_host(host, peek=True)
                display.debug("free host state: %s" % state, host=host_name)
                display.debug("free host task: %s" % task, host=host_name)
                if host_name not in self._tqm._unreachable_hosts and task:

                    # set the flag so the outer loop knows we've still found
                    # some work which needs to be done
                    work_to_do = True

                    display.debug("this host has work to do", host=host_name)

                    # check to see if this host is blocked (still executing a previous task)
                    if host_name not in self._blocked_hosts or not self._blocked_hosts[host_name]:

                        # pop the task, mark the host blocked, and queue it
                        self._blocked_hosts[host_name] = True
                        (state, task) = iterator.get_next_task_for_host(host)

                        try:
                            action = action_loader.get(task.action, class_only=True)
                        except KeyError:
                            # we don't care here, because the action may simply not have a
                            # corresponding action plugin
                            action = None

                        display.debug("getting variables", host=host_name)
                        task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task)
                        self.add_tqm_variables(task_vars, play=iterator._play)
                        templar = Templar(loader=self._loader, variables=task_vars)
                        display.debug("done getting variables", host=host_name)

                        try:
                            task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
                            display.debug("done templating", host=host_name)
                        except Exception:
                            # just ignore any errors during task name templating,
                            # we don't care if it just shows the raw name
                            display.debug("templating failed for some reason", host=host_name)

                        run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
                        if run_once:
                            if action and getattr(action, 'BYPASS_HOST_LOOP', False):
                                raise AnsibleError("The '%s' module bypasses the host loop, which is currently not supported in the free strategy "
                                                   "and would instead execute for every host in the inventory list." % task.action, obj=task._ds)
                            else:
                                display.warning("Using run_once with the free strategy is not currently supported. This task will still be "
                                                "executed for every host in the inventory list.")

                        # check to see if this task should be skipped, due to it being a member of a
                        # role which has already run (and whether that role allows duplicate execution)
                        if task._role and task._role.has_run(host):
                            # If there is no metadata, the default behavior is to not allow duplicates,
                            # if there is metadata, check to see if the allow_duplicates flag was set to true
                            if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
                                display.debug("'%s' skipped because role has already run" % task, host=host_name)
                                del self._blocked_hosts[host_name]
                                continue

                        if task.action == 'meta':
                            self._execute_meta(task, play_context, iterator, target_host=host)
                            self._blocked_hosts[host_name] = False
                        else:
                            # handle step if needed, skip meta actions as they are used internally
                            if not self._step or self._take_step(task, host_name):
                                if task.any_errors_fatal:
                                    display.warning("Using any_errors_fatal with the free strategy is not supported, "
                                                    "as tasks are executed independently on each host")
                                self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
                                self._queue_task(host, task, task_vars, play_context)
                                # each task is counted as a worker being busy
                                workers_free -= 1
                                del task_vars
                else:
                    display.debug("%s is blocked, skipping for now" % host_name)

                # all workers have tasks to do (and the current host isn't done with the play).
                # loop back to starting host and break out
                if self._host_pinned and workers_free == 0 and work_to_do:
                    last_host = starting_host
                    break

                # move on to the next host and make sure we
                # haven't gone past the end of our hosts list
                last_host += 1
                if last_host > len(hosts_left) - 1:
                    last_host = 0

                # if we've looped around back to the start, break out
                if last_host == starting_host:
                    break

            results = self._process_pending_results(iterator)
            host_results.extend(results)

            # each result is counted as a worker being free again
            workers_free += len(results)

            self.update_active_connections(results)

            try:
                included_files = IncludedFile.process_include_results(
                    host_results,
                    iterator=iterator,
                    loader=self._loader,
                    variable_manager=self._variable_manager
                )
            except AnsibleError as e:
                return self._tqm.RUN_ERROR

            if len(included_files) > 0:
                all_blocks = dict((host, []) for host in hosts_left)
                for included_file in included_files:
                    display.debug("collecting new blocks for %s" % included_file)
                    try:
                        if included_file._is_role:
                            new_ir = self._copy_included_file(included_file)

                            new_blocks, handler_blocks = new_ir.get_block_list(
                                play=iterator._play,
                                variable_manager=self._variable_manager,
                                loader=self._loader,
                            )
                        else:
                            new_blocks = self._load_included_file(included_file, iterator=iterator)
                    except AnsibleError as e:
                        for host in included_file._hosts:
                            iterator.mark_host_failed(host)
                        display.warning(to_text(e))
                        continue

                    for new_block in new_blocks:
                        task_vars = self._variable_manager.get_vars(play=iterator._play, task=new_block._parent)
                        final_block = new_block.filter_tagged_tasks(task_vars)
                        for host in hosts_left:
                            if host in included_file._hosts:
                                all_blocks[host].append(final_block)
                    display.debug("done collecting new blocks for %s" % included_file)

                display.debug("adding all collected blocks from %d included file(s) to iterator" % len(included_files))
                for host in hosts_left:
                    iterator.add_tasks(host, all_blocks[host])
                display.debug("done adding collected blocks to iterator")

            # pause briefly so we don't spin lock
            time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)

        # collect all the final results
        results = self._wait_on_pending_results(iterator)

        # run the base class run() method, which executes the cleanup function
        # and runs any outstanding handlers which have been triggered
        return super(StrategyModule, self).run(iterator, play_context, result)
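The "fair" iteration that run()'s docstring describes — resume scanning from the last host served instead of restarting at the top of the list, wrapping past the end — can be sketched in isolation. This is a hypothetical standalone helper, not part of the plugin; it only mirrors the `last_host`/`starting_host` index bookkeeping above:

```python
def next_unblocked_host(hosts, blocked, last_host):
    """Return (index, host) for the next host free to take a task,
    scanning from ``last_host`` and wrapping around, or (index, None)
    if every host is currently blocked."""
    starting_host = last_host  # remember where we started so we know when we've looped
    while True:
        host = hosts[last_host]
        if not blocked.get(host, False):
            return last_host, host
        # move on to the next host, wrapping past the end of the list
        last_host += 1
        if last_host > len(hosts) - 1:
            last_host = 0
        # looped all the way back to the start: nothing is free right now
        if last_host == starting_host:
            return last_host, None
```

Starting the scan at index 1 of `['a', 'b', 'c']` with `'b'` blocked yields `'c'`, rather than favoring `'a'` at the head of the list the way a from-the-top scan would.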