Remove deprecated features and plugins for 11.0.0 (#10126)

* Bump version to 11.0.0.

* Removed deprecated plugins/modules.

* Remove _init_session().

* Remove ack_venv_creation_deprecation.

* Change behavior of state.

* Remove value reading.

* Remove list_all.

* Remove various deprecated module helper things.

* Change default of proxmox's update parameter.

* Fix constructor command order.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* MH: adjust guide

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <russoz@gmail.com>
Felix Fontein 2025-05-19 18:11:39 +02:00 committed by GitHub
parent b861850e1a
commit 9d7b3f13bd
74 changed files with 89 additions and 10574 deletions

.github/BOTMETA.yml

@ -127,8 +127,6 @@ files:
maintainers: $team_ansible_core
$doc_fragments/:
labels: docs_fragments
$doc_fragments/clc.py:
maintainers: clc-runner russoz
$doc_fragments/django.py:
maintainers: russoz
$doc_fragments/hpe3par.py:
@ -254,8 +252,6 @@ files:
$inventories/scaleway.py:
labels: cloud scaleway
maintainers: $team_scaleway
$inventories/stackpath_compute.py:
maintainers: shayrybak
$inventories/virtualbox.py: {}
$inventories/xen_orchestra.py:
maintainers: ddelnano shinuza
@ -299,9 +295,6 @@ files:
$lookups/lastpass.py: {}
$lookups/lmdb_kv.py:
maintainers: jpmens
$lookups/manifold.py:
labels: manifold
maintainers: galanoff
$lookups/merge_variables.py:
maintainers: rlenferink m-a-r-k-e alpex8
$lookups/onepass:
@ -510,8 +503,6 @@ files:
maintainers: NickatEpic
$modules/cisco_webex.py:
maintainers: drew-russell
$modules/clc_:
maintainers: clc-runner
$modules/cloud_init_data_facts.py:
maintainers: resmo
$modules/cloudflare_dns.py:
@ -670,8 +661,6 @@ files:
maintainers: marns93
$modules/hg.py:
maintainers: yeukhon
$modules/hipchat.py:
maintainers: pb8226 shirou
$modules/homebrew.py:
ignore: ryansb
keywords: brew cask darwin homebrew macosx macports osx
@ -1145,8 +1134,6 @@ files:
maintainers: $team_bsd berenddeboer
$modules/pritunl_:
maintainers: Lowess
$modules/profitbricks:
maintainers: baldwinSPC
$modules/proxmox:
keywords: kvm libvirt proxmox qemu
labels: proxmox virt


@ -0,0 +1,19 @@
removed_features:
- "stackpath_compute inventory plugin - the plugin was removed since the company and the service were sunset in June 2024 (https://github.com/ansible-collections/community.general/pull/10126)."
- "manifold lookup plugin - the plugin was removed since the company was acquired in 2021 and service was ceased afterwards (https://github.com/ansible-collections/community.general/pull/10126)."
- "clc_* modules and doc fragment - the modules were removed since CenturyLink Cloud services went EOL in September 2023 (https://github.com/ansible-collections/community.general/pull/10126)."
- "hipchat - the module was removed since the hipchat service has been discontinued and the self-hosted variant has been End of Life since 2020 (https://github.com/ansible-collections/community.general/pull/10126)."
- "profitbrick* modules - the modules were removed since the supporting library is unsupported since 2021 (https://github.com/ansible-collections/community.general/pull/10126)."
- "redfish_utils module utils - the ``_init_session`` method has been removed (https://github.com/ansible-collections/community.general/pull/10126)."
- "django_manage - the ``ack_venv_creation_deprecation`` option has been removed. It had no effect anymore anyway (https://github.com/ansible-collections/community.general/pull/10126)."
- "apt_rpm - the ``present`` and ``installed`` states are no longer equivalent to ``latest``, but to ``present_not_latest`` (https://github.com/ansible-collections/community.general/pull/10126)."
- "git_config - it is no longer allowed to use ``state=present`` with no value to read the config value. Use the ``community.general.git_config_info`` module instead (https://github.com/ansible-collections/community.general/pull/10126)."
- "git_config - the ``list_all`` option has been removed. Use the ``community.general.git_config_info`` module instead (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.mixins.deps module utils - this module utils has been removed. Use the ``deps`` module utils instead (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.mixins.vars module utils - this module utils has been removed. Use ``VarDict`` from the ``vardict`` module utils instead (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.module_helper module utils - ``VarDict`` is now imported from the ``vardict`` module utils and no longer from the removed ``mh.mixins.vars`` module utils (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.module_helper module utils - ``AnsibleModule`` and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.module_helper module utils - the attributes ``use_old_vardict`` and ``mute_vardict_deprecation`` from ``ModuleHelper`` have been removed. We suggest to remove them from your modules if you no longer support community.general < 11.0.0 (https://github.com/ansible-collections/community.general/pull/10126)."
- "module_helper module utils - ``StateMixin``, ``DependencyCtxMgr``, ``VarMeta``, ``VarDict``, and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)."
breaking_changes:
- "proxmox - the default of ``update`` changed from ``false`` to ``true`` (https://github.com/ansible-collections/community.general/pull/10126)."


@ -38,7 +38,6 @@ But bear in mind that it does not showcase all of MH's features:
),
supports_check_mode=True,
)
use_old_vardict = False
def __run__(self):
self.vars.original_message = ''
@ -84,10 +83,6 @@ section above, but there are more elements that will take part in it.
facts_name = None # used if generating facts, from parameters or otherwise
# transitional variables for the new VarDict implementation, see information below
use_old_vardict = True
mute_vardict_deprecation = False
module = dict(
argument_spec=dict(...),
# ...
@ -207,28 +202,14 @@ By using ``self.vars``, you get a central mechanism to access the parameters but
As described in :ref:`ansible_collections.community.general.docsite.guide_vardict`, variables in ``VarDict`` have metadata associated to them.
One of the attributes in that metadata marks the variable for output, and MH makes use of that to generate the module's return values.
.. important::
.. note::
The ``VarDict`` feature described was introduced in community.general 7.1.0, but there was a first
implementation of it embedded within ``ModuleHelper``.
That older implementation is now deprecated and will be removed in community.general 11.0.0.
After community.general 7.1.0, MH modules generate a deprecation message about *using the old VarDict*.
There are two ways to prevent that from happening:
The ``VarDict`` class was introduced in community.general 7.1.0, as part of ``ModuleHelper`` itself.
However, it has been factored out to become a utility on its own, described in :ref:`ansible_collections.community.general.docsite.guide_vardict`,
and the older implementation was removed in community.general 11.0.0.
#. Set ``mute_vardict_deprecation = True`` and the deprecation will be silenced. If the module still uses the old ``VarDict``,
it will not be able to update to community.general 11.0.0 (Spring 2025) upon its release.
#. Set ``use_old_vardict = False`` to make the MH module use the new ``VarDict`` immediately.
We strongly recommend you use the new ``VarDict``, for that you make sure to consult its documentation at
:ref:`ansible_collections.community.general.docsite.guide_vardict`.
.. code-block:: python
class MyTest(ModuleHelper):
use_old_vardict = False
mute_vardict_deprecation = True
...
These two settings are mutually exclusive, but that is not enforced and the behavior when setting both is not specified.
Some code might still refer to the class variables ``use_old_vardict`` and ``mute_vardict_deprecation``, which were used for the transition to the new
implementation, but from community.general 11.0.0 onwards they are no longer used and can be safely removed from the code.
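
As an illustration only, here is a minimal sketch (patterned on the example at the top of this guide, with a hypothetical argument spec) of a ``ModuleHelper`` class after that cleanup:

from ansible_collections.community.general.plugins.module_utils.module_helper import ModuleHelper


class MyTest(ModuleHelper):
    # use_old_vardict and mute_vardict_deprecation are gone in 11.0.0:
    # self.vars is always the VarDict from the vardict module utils.
    module = dict(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )

    def __run__(self):
        self.vars.original_message = ''
        self.vars.message = f'Hello, {self.vars.name}!'
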
Contrary to new variables created in ``VarDict``, module parameters are not set for output by default.
If you want to include some module parameters in the output, list them in the ``output_params`` class variable.
@ -410,7 +391,6 @@ By using ``StateModuleHelper`` you can make your code like the excerpt from the
module = dict(
...
)
use_old_vardict = False
def __init_module__(self):
self.runner = gconftool2_runner(self.module, check_rc=True)


@ -5,7 +5,7 @@
namespace: community
name: general
version: 10.7.0
version: 11.0.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)


@ -100,7 +100,7 @@ plugin_routing:
hashi_vault:
redirect: community.hashi_vault.hashi_vault
manifold:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Company was acquired in 2021 and service was ceased afterwards.
nios:
@ -129,39 +129,39 @@ plugin_routing:
cisco_spark:
redirect: community.general.cisco_webex
clc_alert_policy:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_blueprint_package:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_firewall_policy:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_group:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_loadbalancer:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_modify_server:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_publicip:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_server:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
clc_server_snapshot:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: CenturyLink Cloud services went EOL in September 2023.
consul_acl:
@ -320,7 +320,7 @@ plugin_routing:
hetzner_firewall_info:
redirect: community.hrobot.firewall_info
hipchat:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: The hipchat service has been discontinued and the self-hosted variant has been End of Life since 2020.
hpilo_facts:
@ -645,23 +645,23 @@ plugin_routing:
postgresql_user_obj_stat_info:
redirect: community.postgresql.postgresql_user_obj_stat_info
profitbricks:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
profitbricks_datacenter:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
profitbricks_nic:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
profitbricks_volume:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
profitbricks_volume_attachments:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
purefa_facts:
@ -970,7 +970,7 @@ plugin_routing:
kubevirt:
redirect: community.kubevirt.kubevirt
stackpath_compute:
deprecation:
tombstone:
removal_version: 11.0.0
warning_text: The company and the service were sunset in June 2024.
filter:


@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, Alexei Znamensky <russoz@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard documentation fragment
DOCUMENTATION = r"""
options: {}
requirements:
- requests >= 2.5.0
- clc-sdk
notes:
- To use this module, it is required to set the below environment variables which enables access to the Centurylink Cloud.
- E(CLC_V2_API_USERNAME), the account login ID for the Centurylink Cloud.
- E(CLC_V2_API_PASSWORD), the account password for the Centurylink Cloud.
- Alternatively, the module accepts the API token and account alias. The API token can be generated using the CLC account
login and password using the HTTP API call @ U(https://api.ctl.io/v2/authentication/login).
- E(CLC_V2_API_TOKEN), the API token generated from U(https://api.ctl.io/v2/authentication/login).
- E(CLC_ACCT_ALIAS), the account alias associated with the Centurylink Cloud.
- Users can set E(CLC_V2_API_URL) to specify an endpoint for pointing to a different CLC environment.
"""


@ -1,285 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020 Shay Rybak <shay.rybak@stackpath.com>
# Copyright (c) 2020 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: stackpath_compute
short_description: StackPath Edge Computing inventory source
version_added: 1.2.0
author:
- UNKNOWN (@shayrybak)
deprecated:
removed_in: 11.0.0
why: Stackpath (the company) ceased its operations in June 2024. The API URL this plugin relies on is not found in DNS.
alternative: There is none.
extends_documentation_fragment:
- inventory_cache
- constructed
description:
- Get inventory hosts from StackPath Edge Computing.
- Uses a YAML configuration file that ends with stackpath_compute.(yml|yaml).
options:
plugin:
description:
- A token that ensures this is a source file for the plugin.
required: true
type: string
choices: ['community.general.stackpath_compute']
client_id:
description:
- An OAuth client ID generated from the API Management section of the StackPath customer portal U(https://control.stackpath.net/api-management).
required: true
type: str
client_secret:
description:
- An OAuth client secret generated from the API Management section of the StackPath customer portal U(https://control.stackpath.net/api-management).
required: true
type: str
stack_slugs:
description:
- A list of Stack slugs to query instances in. If no entry then get instances in all stacks on the account.
type: list
elements: str
use_internal_ip:
description:
- Whether or not to use internal IP addresses, If false, uses external IP addresses, internal otherwise.
- If an instance doesn't have an external IP it will not be returned when this option is set to false.
type: bool
"""
EXAMPLES = r"""
plugin: community.general.stackpath_compute
client_id: my_client_id
client_secret: my_client_secret
stack_slugs:
- my_first_stack_slug
- my_other_stack_slug
use_internal_ip: false
"""
import traceback
import json
from ansible.errors import AnsibleError
from ansible.module_utils.urls import open_url
from ansible.plugins.inventory import (
BaseInventoryPlugin,
Constructable,
Cacheable
)
from ansible.utils.display import Display
from ansible_collections.community.general.plugins.plugin_utils.unsafe import make_unsafe
display = Display()
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'community.general.stackpath_compute'
def __init__(self):
super(InventoryModule, self).__init__()
# credentials
self.client_id = None
self.client_secret = None
self.stack_slug = None
self.api_host = "https://gateway.stackpath.com"
self.group_keys = [
"stackSlug",
"workloadId",
"cityCode",
"countryCode",
"continent",
"target",
"name",
"workloadSlug"
]
def _validate_config(self, config):
if config['plugin'] != 'community.general.stackpath_compute':
raise AnsibleError("plugin doesn't match this plugin")
try:
client_id = config['client_id']
if len(client_id) != 32:
raise AnsibleError("client_id must be 32 characters long")
except KeyError:
raise AnsibleError("config missing client_id, a required option")
try:
client_secret = config['client_secret']
if len(client_secret) != 64:
raise AnsibleError("client_secret must be 64 characters long")
except KeyError:
raise AnsibleError("config missing client_id, a required option")
return True
def _set_credentials(self):
'''
:param config_data: contents of the inventory config file
'''
self.client_id = self.get_option('client_id')
self.client_secret = self.get_option('client_secret')
def _authenticate(self):
payload = json.dumps(
{
"client_id": self.client_id,
"client_secret": self.client_secret,
"grant_type": "client_credentials",
}
)
headers = {
"Content-Type": "application/json",
}
resp = open_url(
f"{self.api_host}/identity/v1/oauth2/token",
headers=headers,
data=payload,
method="POST"
)
status_code = resp.code
if status_code == 200:
body = resp.read()
self.auth_token = json.loads(body)["access_token"]
def _query(self):
results = []
workloads = []
self._authenticate()
for stack_slug in self.stack_slugs:
try:
workloads = self._stackpath_query_get_list(f"{self.api_host}/workload/v1/stacks/{stack_slug}/workloads")
except Exception:
raise AnsibleError(f"Failed to get workloads from the StackPath API: {traceback.format_exc()}")
for workload in workloads:
try:
workload_instances = self._stackpath_query_get_list(
f"{self.api_host}/workload/v1/stacks/{stack_slug}/workloads/{workload['id']}/instances"
)
except Exception:
raise AnsibleError(f"Failed to get workload instances from the StackPath API: {traceback.format_exc()}")
for instance in workload_instances:
if instance["phase"] == "RUNNING":
instance["stackSlug"] = stack_slug
instance["workloadId"] = workload["id"]
instance["workloadSlug"] = workload["slug"]
instance["cityCode"] = instance["location"]["cityCode"]
instance["countryCode"] = instance["location"]["countryCode"]
instance["continent"] = instance["location"]["continent"]
instance["target"] = instance["metadata"]["labels"]["workload.platform.stackpath.net/target-name"]
try:
if instance[self.hostname_key]:
results.append(instance)
except KeyError:
pass
return results
def _populate(self, instances):
for instance in instances:
for group_key in self.group_keys:
group = f"{group_key}_{instance[group_key]}"
group = group.lower().replace(" ", "_").replace("-", "_")
self.inventory.add_group(group)
self.inventory.add_host(instance[self.hostname_key],
group=group)
def _stackpath_query_get_list(self, url):
self._authenticate()
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.auth_token}",
}
next_page = True
result = []
cursor = '-1'
while next_page:
resp = open_url(
f"{url}?page_request.first=10&page_request.after={cursor}",
headers=headers,
method="GET"
)
status_code = resp.code
if status_code == 200:
body = resp.read()
body_json = json.loads(body)
result.extend(body_json["results"])
next_page = body_json["pageInfo"]["hasNextPage"]
if next_page:
cursor = body_json["pageInfo"]["endCursor"]
return result
def _get_stack_slugs(self, stacks):
self.stack_slugs = [stack["slug"] for stack in stacks]
def verify_file(self, path):
'''
:param loader: an ansible.parsing.dataloader.DataLoader object
:param path: the path to the inventory config file
:return the contents of the config file
'''
if super(InventoryModule, self).verify_file(path):
if path.endswith(('stackpath_compute.yml', 'stackpath_compute.yaml')):
return True
display.debug(
"stackpath_compute inventory filename must end with \
'stackpath_compute.yml' or 'stackpath_compute.yaml'"
)
return False
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
config = self._read_config_data(path)
self._validate_config(config)
self._set_credentials()
# get user specifications
self.use_internal_ip = self.get_option('use_internal_ip')
if self.use_internal_ip:
self.hostname_key = "ipAddress"
else:
self.hostname_key = "externalIpAddress"
self.stack_slugs = self.get_option('stack_slugs')
if not self.stack_slugs:
try:
stacks = self._stackpath_query_get_list(f"{self.api_host}/stack/v1/stacks")
self._get_stack_slugs(stacks)
except Exception:
raise AnsibleError(f"Failed to get stack IDs from the Stackpath API: {traceback.format_exc()}")
cache_key = self.get_cache_key(path)
# false when refresh_cache or --flush-cache is used
if cache:
# get the user-specified directive
cache = self.get_option('cache')
# Generate inventory
cache_needs_update = False
if cache:
try:
results = self._cache[cache_key]
except KeyError:
# if cache expires or cache file doesn't exist
cache_needs_update = True
if not cache or cache_needs_update:
results = self._query()
self._populate(make_unsafe(results))
# If the cache has expired/doesn't exist or
# if refresh_inventory/flush cache is used
# when the user is using caching, update the cached inventory
try:
if cache_needs_update or (not cache and self.get_option('cache')):
self._cache[cache_key] = results
except Exception:
raise AnsibleError(f"Failed to populate data: {traceback.format_exc()}")


@ -1,282 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Arigato Machine Inc.
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
author:
- Kyrylo Galanov (!UNKNOWN) <galanoff@gmail.com>
name: manifold
short_description: get credentials from Manifold.co
description:
- Retrieves resources' credentials from Manifold.co
deprecated:
removed_in: 11.0.0
why: Manifold (the company) has been acquired in 2021 and the services used by this plugin are no longer operational.
alternative: There is none.
options:
_terms:
description:
- Optional list of resource labels to lookup on Manifold.co. If no resources are specified, all
matched resources will be returned.
type: list
elements: string
required: false
api_token:
description:
- manifold API token
type: string
required: true
env:
- name: MANIFOLD_API_TOKEN
project:
description:
- The project label you want to get the resource for.
type: string
required: false
team:
description:
- The team label you want to get the resource for.
type: string
required: false
'''
EXAMPLES = '''
- name: all available resources
ansible.builtin.debug:
msg: "{{ lookup('community.general.manifold', api_token='SecretToken') }}"
- name: all available resources for a specific project in specific team
ansible.builtin.debug:
msg: "{{ lookup('community.general.manifold', api_token='SecretToken', project='poject-1', team='team-2') }}"
- name: two specific resources
ansible.builtin.debug:
msg: "{{ lookup('community.general.manifold', 'resource-1', 'resource-2') }}"
'''
RETURN = '''
_raw:
description:
- dictionary of credentials ready to be consumed as environment variables. If multiple resources define
the same environment variable(s), the last one returned by the Manifold API will take precedence.
type: dict
'''
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
from ansible.module_utils.urls import open_url, ConnectionError, SSLValidationError
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils import six
from ansible.utils.display import Display
from traceback import format_exception
import json
import sys
display = Display()
class ApiError(Exception):
pass
class ManifoldApiClient(object):
http_agent = 'python-manifold-ansible-1.0.0'
def __init__(self, token):
self._token = token
def _make_url(self, api, endpoint):
return f'https://api.{api}.manifold.co/v1/{endpoint}'
def request(self, api, endpoint, *args, **kwargs):
"""
Send a request to API backend and pre-process a response.
:param api: API to send a request to
:type api: str
:param endpoint: API endpoint to fetch data from
:type endpoint: str
:param args: other args for open_url
:param kwargs: other kwargs for open_url
:return: server response. JSON response is automatically deserialized.
:rtype: dict | list | str
"""
default_headers = {
'Authorization': f"Bearer {self._token}",
'Accept': "*/*" # Otherwise server doesn't set content-type header
}
url = self._make_url(api, endpoint)
headers = default_headers
arg_headers = kwargs.pop('headers', None)
if arg_headers:
headers.update(arg_headers)
try:
display.vvvv(f'manifold lookup connecting to {url}')
response = open_url(url, headers=headers, http_agent=self.http_agent, *args, **kwargs)
data = response.read()
if response.headers.get('content-type') == 'application/json':
data = json.loads(data)
return data
except ValueError:
raise ApiError(f'JSON response can\'t be parsed while requesting {url}:\n{data}')
except HTTPError as e:
raise ApiError(f'Server returned: {e} while requesting {url}:\n{e.read()}')
except URLError as e:
raise ApiError(f'Failed lookup url for {url} : {e}')
except SSLValidationError as e:
raise ApiError(f'Error validating the server\'s certificate for {url}: {e}')
except ConnectionError as e:
raise ApiError(f'Error connecting to {url}: {e}')
def get_resources(self, team_id=None, project_id=None, label=None):
"""
Get resources list
:param team_id: ID of the Team to filter resources by
:type team_id: str
:param project_id: ID of the project to filter resources by
:type project_id: str
:param label: filter resources by a label, returns a list with one or zero elements
:type label: str
:return: list of resources
:rtype: list
"""
api = 'marketplace'
endpoint = 'resources'
query_params = {}
if team_id:
query_params['team_id'] = team_id
if project_id:
query_params['project_id'] = project_id
if label:
query_params['label'] = label
if query_params:
endpoint += f"?{urlencode(query_params)}"
return self.request(api, endpoint)
def get_teams(self, label=None):
"""
Get teams list
:param label: filter teams by a label, returns a list with one or zero elements
:type label: str
:return: list of teams
:rtype: list
"""
api = 'identity'
endpoint = 'teams'
data = self.request(api, endpoint)
# Label filtering is not supported by API, however this function provides uniform interface
if label:
data = list(filter(lambda x: x['body']['label'] == label, data))
return data
def get_projects(self, label=None):
"""
Get projects list
:param label: filter projects by a label, returns a list with one or zero elements
:type label: str
:return: list of projects
:rtype: list
"""
api = 'marketplace'
endpoint = 'projects'
query_params = {}
if label:
query_params['label'] = label
if query_params:
endpoint += f"?{urlencode(query_params)}"
return self.request(api, endpoint)
def get_credentials(self, resource_id):
"""
Get resource credentials
:param resource_id: ID of the resource to filter credentials by
:type resource_id: str
:return:
"""
api = 'marketplace'
endpoint = f"credentials?{urlencode({'resource_id': resource_id})}"
return self.request(api, endpoint)
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
"""
:param terms: a list of resources lookups to run.
:param variables: ansible variables active at the time of the lookup
:param api_token: API token
:param project: optional project label
:param team: optional team label
:return: a dictionary of resources credentials
"""
self.set_options(var_options=variables, direct=kwargs)
api_token = self.get_option('api_token')
project = self.get_option('project')
team = self.get_option('team')
try:
labels = terms
client = ManifoldApiClient(api_token)
if team:
team_data = client.get_teams(team)
if len(team_data) == 0:
raise AnsibleError(f"Team '{team}' does not exist")
team_id = team_data[0]['id']
else:
team_id = None
if project:
project_data = client.get_projects(project)
if len(project_data) == 0:
raise AnsibleError(f"Project '{project}' does not exist")
project_id = project_data[0]['id']
else:
project_id = None
if len(labels) == 1: # Use server-side filtering if one resource is requested
resources_data = client.get_resources(team_id=team_id, project_id=project_id, label=labels[0])
else: # Get all resources and optionally filter labels
resources_data = client.get_resources(team_id=team_id, project_id=project_id)
if labels:
resources_data = list(filter(lambda x: x['body']['label'] in labels, resources_data))
if labels and len(resources_data) < len(labels):
fetched_labels = [r['body']['label'] for r in resources_data]
not_found_labels = [label for label in labels if label not in fetched_labels]
raise AnsibleError(f"Resource(s) {', '.join(not_found_labels)} do not exist")
credentials = {}
cred_map = {}
for resource in resources_data:
resource_credentials = client.get_credentials(resource['id'])
if len(resource_credentials) and resource_credentials[0]['body']['values']:
for cred_key, cred_val in six.iteritems(resource_credentials[0]['body']['values']):
label = resource['body']['label']
if cred_key in credentials:
display.warning(f"'{cred_key}' with label '{cred_map[cred_key]}' was replaced by resource data with label '{label}'")
credentials[cred_key] = cred_val
cred_map[cred_key] = label
ret = [credentials]
return ret
except ApiError as e:
raise AnsibleError(f'API Error: {e}')
except AnsibleError as e:
raise e
except Exception:
exc_type, exc_value, exc_traceback = sys.exc_info()
raise AnsibleError(format_exception(exc_type, exc_value, exc_traceback))


@ -67,11 +67,9 @@ class _DjangoRunner(PythonRunner):
class DjangoModuleHelper(ModuleHelper):
module = {}
use_old_vardict = False
django_admin_cmd = None
arg_formats = {}
django_admin_arg_order = ()
use_old_vardict = False
_django_args = []
_check_mode_arg = ""


@ -1,38 +0,0 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright (c) 2020, Ansible Project
# Simplified BSD License (see LICENSES/BSD-2-Clause.txt or https://opensource.org/licenses/BSD-2-Clause)
# SPDX-License-Identifier: BSD-2-Clause
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class DependencyCtxMgr(object):
"""
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 11.0.0
Modules should use plugins/module_utils/deps.py instead.
"""
def __init__(self, name, msg=None):
self.name = name
self.msg = msg
self.has_it = False
self.exc_type = None
self.exc_val = None
self.exc_tb = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.has_it = exc_type is None
self.exc_type = exc_type
self.exc_val = exc_val
self.exc_tb = exc_tb
return not self.has_it
@property
def text(self):
return self.msg or str(self.exc_val)
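
As its docstring said, the replacement for this context manager is plugins/module_utils/deps.py. A minimal sketch of that pattern, assuming the deps.declare context manager and the deps.validate helper (the latter also appears unchanged in other files of this commit):

from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils import deps

# Record the outcome of the import instead of failing at import time.
with deps.declare("requests"):
    import requests


def main():
    module = AnsibleModule(argument_spec=dict())
    # Fail the module with a helpful message if "requests" could not be imported.
    deps.validate(module)
    module.exit_json(changed=False)


if __name__ == '__main__':
    main()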


@ -1,153 +0,0 @@
# -*- coding: utf-8 -*-
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# Copyright (c) 2020, Ansible Project
# Simplified BSD License (see LICENSES/BSD-2-Clause.txt or https://opensource.org/licenses/BSD-2-Clause)
# SPDX-License-Identifier: BSD-2-Clause
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import copy
class VarMeta(object):
"""
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 11.0.0
Modules should use the VarDict from plugins/module_utils/vardict.py instead.
"""
NOTHING = object()
def __init__(self, diff=False, output=True, change=None, fact=False):
self.init = False
self.initial_value = None
self.value = None
self.diff = diff
self.change = diff if change is None else change
self.output = output
self.fact = fact
def set(self, diff=None, output=None, change=None, fact=None, initial_value=NOTHING):
if diff is not None:
self.diff = diff
if output is not None:
self.output = output
if change is not None:
self.change = change
if fact is not None:
self.fact = fact
if initial_value is not self.NOTHING:
self.initial_value = copy.deepcopy(initial_value)
def set_value(self, value):
if not self.init:
self.initial_value = copy.deepcopy(value)
self.init = True
self.value = value
return self
@property
def has_changed(self):
return self.change and (self.initial_value != self.value)
@property
def diff_result(self):
return None if not (self.diff and self.has_changed) else {
'before': self.initial_value,
'after': self.value,
}
def __str__(self):
return "<VarMeta: value={0}, initial={1}, diff={2}, output={3}, change={4}>".format(
self.value, self.initial_value, self.diff, self.output, self.change
)
class VarDict(object):
"""
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 11.0.0
Modules should use the VarDict from plugins/module_utils/vardict.py instead.
"""
def __init__(self):
self._data = dict()
self._meta = dict()
def __getitem__(self, item):
return self._data[item]
def __setitem__(self, key, value):
self.set(key, value)
def __getattr__(self, item):
try:
return self._data[item]
except KeyError:
return getattr(self._data, item)
def __setattr__(self, key, value):
if key in ('_data', '_meta'):
super(VarDict, self).__setattr__(key, value)
else:
self.set(key, value)
def meta(self, name):
return self._meta[name]
def set_meta(self, name, **kwargs):
self.meta(name).set(**kwargs)
def set(self, name, value, **kwargs):
if name in ('_data', '_meta'):
raise ValueError("Names _data and _meta are reserved for use by ModuleHelper")
self._data[name] = value
if name in self._meta:
meta = self.meta(name)
else:
meta = VarMeta(**kwargs)
meta.set_value(value)
self._meta[name] = meta
def output(self):
return {k: v for k, v in self._data.items() if self.meta(k).output}
def diff(self):
diff_results = [(k, self.meta(k).diff_result) for k in self._data]
diff_results = [dr for dr in diff_results if dr[1] is not None]
if diff_results:
before = dict((dr[0], dr[1]['before']) for dr in diff_results)
after = dict((dr[0], dr[1]['after']) for dr in diff_results)
return {'before': before, 'after': after}
return None
def facts(self):
facts_result = {k: v for k, v in self._data.items() if self._meta[k].fact}
return facts_result if facts_result else None
def change_vars(self):
return [v for v in self._data if self.meta(v).change]
def has_changed(self, v):
return self._meta[v].has_changed
class VarsMixin(object):
"""
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 11.0.0
Modules should use the VarDict from plugins/module_utils/vardict.py instead.
"""
def __init__(self, module=None):
self.vars = VarDict()
super(VarsMixin, self).__init__(module)
def update_vars(self, meta=None, **kwargs):
if meta is None:
meta = {}
for k, v in kwargs.items():
self.vars.set(k, v, **meta)
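
As the docstrings above note, the replacement is the VarDict from plugins/module_utils/vardict.py. A short sketch of typical usage, assuming the set()/output()/has_changed API described in the VarDict guide:

from ansible_collections.community.general.plugins.module_utils.vardict import VarDict

vardict = VarDict()
# Attribute-style assignment creates a variable that is included in the output by default.
vardict.message = "hello"
# set() accepts metadata, for example to track changes and produce diff information.
vardict.set("version", "1.0.0", change=True, diff=True)
vardict.version = "1.1.0"

result = vardict.output()      # {"message": "hello", "version": "1.1.0"}
changed = vardict.has_changed  # True, because "version" is tracked for changes and was modified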


@ -10,13 +10,9 @@ __metaclass__ = type
from ansible.module_utils.common.dict_transformations import dict_merge
from ansible_collections.community.general.plugins.module_utils.vardict import VarDict as _NewVarDict # remove "as NewVarDict" in 11.0.0
# (TODO: remove AnsibleModule!) pylint: disable-next=unused-import
from ansible_collections.community.general.plugins.module_utils.mh.base import AnsibleModule # noqa: F401 DEPRECATED, remove in 11.0.0
from ansible_collections.community.general.plugins.module_utils.vardict import VarDict
from ansible_collections.community.general.plugins.module_utils.mh.base import ModuleHelperBase
from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin
# (TODO: remove mh.mixins.vars!) pylint: disable-next=unused-import
from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarsMixin, VarDict as _OldVarDict # noqa: F401 remove in 11.0.0
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deprecate_attrs import DeprecateAttrsMixin
@ -26,24 +22,11 @@ class ModuleHelper(DeprecateAttrsMixin, ModuleHelperBase):
diff_params = ()
change_params = ()
facts_params = ()
use_old_vardict = True # remove in 11.0.0
mute_vardict_deprecation = False
def __init__(self, module=None):
if self.use_old_vardict: # remove first half of the if in 11.0.0
self.vars = _OldVarDict()
super(ModuleHelper, self).__init__(module)
if not self.mute_vardict_deprecation:
self.module.deprecate(
"This class is using the old VarDict from ModuleHelper, which is deprecated. "
"Set the class variable use_old_vardict to False and make the necessary adjustments."
"The old VarDict class will be removed in community.general 11.0.0",
version="11.0.0", collection_name="community.general"
)
else:
self.vars = _NewVarDict()
super(ModuleHelper, self).__init__(module)
super(ModuleHelper, self).__init__(module)
self.vars = VarDict()
for name, value in self.module.params.items():
self.vars.set(
name, value,
@ -66,9 +49,6 @@ class ModuleHelper(DeprecateAttrsMixin, ModuleHelperBase):
self.update_vars(meta={"fact": True}, **kwargs)
def _vars_changed(self):
if self.use_old_vardict:
return any(self.vars.has_changed(v) for v in self.vars.change_vars())
return self.vars.has_changed
def has_changed(self):


@ -11,12 +11,8 @@ __metaclass__ = type
from ansible_collections.community.general.plugins.module_utils.mh.module_helper import (
ModuleHelper, StateModuleHelper,
AnsibleModule # remove in 11.0.0
)
from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin # noqa: F401 remove in 11.0.0
from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyCtxMgr # noqa: F401 remove in 11.0.0
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException # noqa: F401
from ansible_collections.community.general.plugins.module_utils.mh.deco import (
cause_changes, module_fails_on_exception, check_mode_skip, check_mode_skip_returns,
)
from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarMeta, VarDict, VarsMixin # noqa: F401 remove in 11.0.0


@ -451,9 +451,6 @@ class RedfishUtils(object):
pass
return msg, data
def _init_session(self):
self.module.deprecate("Method _init_session is deprecated and will be removed.", version="11.0.0", collection_name="community.general")
def _get_vendor(self):
# If we got the vendor info once, don't get it again
if self._vendor is not None:


@ -150,7 +150,6 @@ class AndroidSdk(StateModuleHelper):
),
supports_check_mode=True
)
use_old_vardict = False
def __init_module__(self):
self.sdkmanager = AndroidSdkManager(self.module)


@ -220,7 +220,6 @@ class AnsibleGalaxyInstall(ModuleHelper):
required_if=[('type', 'both', ['requirements_file'])],
supports_check_mode=False,
)
use_old_vardict = False
command = 'ansible-galaxy'
command_args_formats = dict(


@ -382,7 +382,6 @@ class ApacheModProxy(ModuleHelper):
),
supports_check_mode=True
)
use_old_vardict = False
def __init_module__(self):
deps.validate(self.module)


@ -35,9 +35,9 @@ options:
state:
description:
- Indicates the desired package state.
- Please note that V(present) and V(installed) are equivalent to V(latest) right now. This will change in the future.
To simply ensure that a package is installed, without upgrading it, use the V(present_not_latest) state.
- The states V(latest) and V(present_not_latest) have been added in community.general 8.6.0.
- Please note that before community.general 11.0.0, V(present) and V(installed) were equivalent to V(latest).
Since community.general 11.0.0, they are equivalent to V(present_not_latest).
choices:
- absent
- present
@ -307,17 +307,6 @@ def main():
module.fail_json(msg="cannot find /usr/bin/apt-get and/or /usr/bin/rpm")
p = module.params
if p['state'] in ['installed', 'present']:
module.deprecate(
'state=%s currently behaves unexpectedly by always upgrading to the latest version if'
' the package is already installed. This behavior is deprecated and will change in'
' community.general 11.0.0. You can use state=latest to explicitly request this behavior'
' or state=present_not_latest to explicitly request the behavior that state=%s will have'
' in community.general 11.0.0, namely that the package will not be upgraded if it is'
' already installed.' % (p['state'], p['state']),
version='11.0.0',
collection_name='community.general',
)
modified = False
output = ""
@ -341,7 +330,7 @@ def main():
packages = p['package']
if p['state'] in ['installed', 'present', 'present_not_latest', 'latest']:
(m, out) = install_packages(module, packages, allow_upgrade=p['state'] != 'present_not_latest')
(m, out) = install_packages(module, packages, allow_upgrade=p['state'] == 'latest')
modified = modified or m
output += out


@ -1,338 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_aa_policy
short_description: Create or Delete Anti-Affinity Policies at CenturyLink Cloud
description:
- An Ansible module to Create or Delete Anti-Affinity Policies at CenturyLink Cloud.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the Anti-Affinity Policy.
type: str
required: true
location:
description:
- Datacenter in which the policy lives/should live.
type: str
required: true
state:
description:
- Whether to create or delete the policy.
type: str
required: false
default: present
choices: ['present', 'absent']
"""
EXAMPLES = r"""
- name: Create AA Policy
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create an Anti Affinity Policy
community.general.clc_aa_policy:
name: Hammer Time
location: UK3
state: present
register: policy
- name: Debug
ansible.builtin.debug:
var: policy
- name: Delete AA Policy
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Delete an Anti Affinity Policy
community.general.clc_aa_policy:
name: Hammer Time
location: UK3
state: absent
register: policy
- name: Debug
ansible.builtin.debug:
var: policy
"""
RETURN = r"""
policy:
description: The anti-affinity policy information.
returned: success
type: dict
sample:
{
"id":"1a28dd0988984d87b9cd61fa8da15424",
"name":"test_aa_policy",
"location":"UC1",
"links":[
{
"rel":"self",
"href":"/v2/antiAffinityPolicies/wfad/1a28dd0988984d87b9cd61fa8da15424",
"verbs":[
"GET",
"DELETE",
"PUT"
]
},
{
"rel":"location",
"href":"/v2/datacenters/wfad/UC1",
"id":"uc1",
"name":"UC1 - US West (Santa Clara)"
}
]
}
"""
__version__ = '${version}'
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk:
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcAntiAffinityPolicy:
clc = clc_sdk
module = None
def __init__(self, module):
"""
Construct module
"""
self.module = module
self.policy_dict = {}
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'),
exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(required=True),
location=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
)
return argument_spec
# Module Behavior Goodness
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
p = self.module.params
self._set_clc_credentials_from_env()
self.policy_dict = self._get_policies_for_datacenter(p)
if p['state'] == "absent":
changed, policy = self._ensure_policy_is_absent(p)
else:
changed, policy = self._ensure_policy_is_present(p)
if hasattr(policy, 'data'):
policy = policy.data
elif hasattr(policy, '__dict__'):
policy = policy.__dict__
self.module.exit_json(changed=changed, policy=policy)
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _get_policies_for_datacenter(self, p):
"""
Get the Policies for a datacenter by calling the CLC API.
:param p: datacenter to get policies from
:return: policies in the datacenter
"""
response = {}
policies = self.clc.v2.AntiAffinity.GetAll(location=p['location'])
for policy in policies:
response[policy.name] = policy
return response
def _create_policy(self, p):
"""
Create an Anti Affinity Policy using the CLC API.
:param p: datacenter to create policy in
:return: response dictionary from the CLC API.
"""
try:
return self.clc.v2.AntiAffinity.Create(
name=p['name'],
location=p['location'])
except CLCException as ex:
self.module.fail_json(msg='Failed to create anti affinity policy : {0}. {1}'.format(
p['name'], ex.response_text
))
def _delete_policy(self, p):
"""
Delete an Anti Affinity Policy using the CLC API.
:param p: datacenter to delete a policy from
:return: none
"""
try:
policy = self.policy_dict[p['name']]
policy.Delete()
except CLCException as ex:
self.module.fail_json(msg='Failed to delete anti affinity policy : {0}. {1}'.format(
p['name'], ex.response_text
))
def _policy_exists(self, policy_name):
"""
Check to see if an Anti Affinity Policy exists
:param policy_name: name of the policy
:return: boolean of if the policy exists
"""
if policy_name in self.policy_dict:
return self.policy_dict.get(policy_name)
return False
def _ensure_policy_is_absent(self, p):
"""
Makes sure that a policy is absent
:param p: dictionary of policy name
:return: tuple of if a deletion occurred and the name of the policy that was deleted
"""
changed = False
if self._policy_exists(policy_name=p['name']):
changed = True
if not self.module.check_mode:
self._delete_policy(p)
return changed, None
def _ensure_policy_is_present(self, p):
"""
Ensures that a policy is present
:param p: dictionary of a policy name
:return: tuple of if an addition occurred and the name of the policy that was added
"""
changed = False
policy = self._policy_exists(policy_name=p['name'])
if not policy:
changed = True
policy = None
if not self.module.check_mode:
policy = self._create_policy(p)
return changed, policy
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcAntiAffinityPolicy._define_module_argument_spec(),
supports_check_mode=True)
clc_aa_policy = ClcAntiAffinityPolicy(module)
clc_aa_policy.process_request()
if __name__ == '__main__':
main()


@ -1,522 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_alert_policy
short_description: Create or Delete Alert Policies at CenturyLink Cloud
description:
- An Ansible module to Create or Delete Alert Policies at CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
alias:
description:
- The alias of your CLC Account.
type: str
required: true
name:
description:
- The name of the alert policy. This is mutually exclusive with O(id).
type: str
id:
description:
- The alert policy ID. This is mutually exclusive with O(name).
type: str
alert_recipients:
description:
- A list of recipient email IDs to notify the alert. This is required for O(state=present).
type: list
elements: str
metric:
description:
- The metric on which to measure the condition that will trigger the alert. This is required for O(state=present).
type: str
choices: ['cpu', 'memory', 'disk']
duration:
description:
- The length of time in minutes that the condition must exceed the threshold. This is required for O(state=present).
type: str
threshold:
description:
- The threshold that will trigger the alert when the metric equals or exceeds it. This is required for O(state=present).
This number represents a percentage and must be a value between 5.0 - 95.0 that is a multiple of 5.0.
type: int
state:
description:
- Whether to create or delete the policy.
type: str
default: present
choices: ['present', 'absent']
"""
EXAMPLES = r"""
- name: Create Alert Policy Example
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create an Alert Policy for disk above 80% for 5 minutes
community.general.clc_alert_policy:
alias: wfad
name: 'alert for disk > 80%'
alert_recipients:
- test1@centurylink.com
- test2@centurylink.com
metric: 'disk'
duration: '00:05:00'
threshold: 80
state: present
register: policy
- name: Debug
ansible.builtin.debug: var=policy
- name: Delete Alert Policy Example
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Delete an Alert Policy
community.general.clc_alert_policy:
alias: wfad
name: 'alert for disk > 80%'
state: absent
register: policy
- name: Debug
ansible.builtin.debug: var=policy
"""
RETURN = r"""
policy:
description: The alert policy information.
returned: success
type: dict
sample:
{
"actions": [
{
"action": "email",
"settings": {
"recipients": [
"user1@domain.com",
"user1@domain.com"
]
}
}
],
"id": "ba54ac54a60d4a4f1ed6d48c1ce240a7",
"links": [
{
"href": "/v2/alertPolicies/alias/ba54ac54a60d4a4fb1d6d48c1ce240a7",
"rel": "self",
"verbs": [
"GET",
"DELETE",
"PUT"
]
}
],
"name": "test_alert",
"triggers": [
{
"duration": "00:05:00",
"metric": "disk",
"threshold": 80.0
}
]
}
"""
__version__ = '${version}'
import json
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import APIFailedResponse
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcAlertPolicy:
clc = clc_sdk
module = None
def __init__(self, module):
"""
Construct module
"""
self.module = module
self.policy_dict = {}
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(),
id=dict(),
alias=dict(required=True),
alert_recipients=dict(type='list', elements='str'),
metric=dict(
choices=[
'cpu',
'memory',
'disk']),
duration=dict(type='str'),
threshold=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])
)
mutually_exclusive = [
['name', 'id']
]
return {'argument_spec': argument_spec,
'mutually_exclusive': mutually_exclusive}
# Module Behavior Goodness
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
p = self.module.params
self._set_clc_credentials_from_env()
self.policy_dict = self._get_alert_policies(p['alias'])
if p['state'] == 'present':
changed, policy = self._ensure_alert_policy_is_present()
else:
changed, policy = self._ensure_alert_policy_is_absent()
self.module.exit_json(changed=changed, policy=policy)
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _ensure_alert_policy_is_present(self):
"""
Ensures that the alert policy is present
:return: (changed, policy)
changed: A flag representing if anything is modified
policy: the created/updated alert policy
"""
changed = False
p = self.module.params
policy_name = p.get('name')
if not policy_name:
self.module.fail_json(msg='Policy name is a required')
policy = self._alert_policy_exists(policy_name)
if not policy:
changed = True
policy = None
if not self.module.check_mode:
policy = self._create_alert_policy()
else:
changed_u, policy = self._ensure_alert_policy_is_updated(policy)
if changed_u:
changed = True
return changed, policy
def _ensure_alert_policy_is_absent(self):
"""
Ensures that the alert policy is absent
:return: (changed, None)
changed: A flag representing if anything is modified
"""
changed = False
p = self.module.params
alert_policy_id = p.get('id')
alert_policy_name = p.get('name')
alias = p.get('alias')
if not alert_policy_id and not alert_policy_name:
self.module.fail_json(
msg='Either alert policy id or policy name is required')
if not alert_policy_id and alert_policy_name:
alert_policy_id = self._get_alert_policy_id(
self.module,
alert_policy_name)
if alert_policy_id and alert_policy_id in self.policy_dict:
changed = True
if not self.module.check_mode:
self._delete_alert_policy(alias, alert_policy_id)
return changed, None
def _ensure_alert_policy_is_updated(self, alert_policy):
"""
Ensures the alert policy is updated if anything is changed in the alert policy configuration
:param alert_policy: the target alert policy
:return: (changed, policy)
changed: A flag representing if anything is modified
policy: the updated alert policy
"""
changed = False
p = self.module.params
alert_policy_id = alert_policy.get('id')
email_list = p.get('alert_recipients')
metric = p.get('metric')
duration = p.get('duration')
threshold = p.get('threshold')
policy = alert_policy
if (metric and metric != str(alert_policy.get('triggers')[0].get('metric'))) or \
(duration and duration != str(alert_policy.get('triggers')[0].get('duration'))) or \
(threshold and float(threshold) != float(alert_policy.get('triggers')[0].get('threshold'))):
changed = True
elif email_list:
t_email_list = list(
alert_policy.get('actions')[0].get('settings').get('recipients'))
if set(email_list) != set(t_email_list):
changed = True
if changed and not self.module.check_mode:
policy = self._update_alert_policy(alert_policy_id)
return changed, policy
def _get_alert_policies(self, alias):
"""
Get the alert policies for account alias by calling the CLC API.
:param alias: the account alias
:return: the alert policies for the account alias
"""
response = {}
policies = self.clc.v2.API.Call('GET',
'/v2/alertPolicies/%s'
% alias)
for policy in policies.get('items'):
response[policy.get('id')] = policy
return response
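# The mapping returned above is keyed by policy id; each value is the raw
# policy dict from the API. Based on how it is consumed elsewhere in this
# module, an entry roughly looks like (abridged, values illustrative):
#
#   {
#       'id': 'fc36f1bf...',
#       'name': 'cpu alert policy',
#       'actions': [{'action': 'email', 'settings': {'recipients': ['user@example.com']}}],
#       'triggers': [{'metric': 'cpu', 'duration': '00:05:00', 'threshold': 75}]
#   }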
def _create_alert_policy(self):
"""
Create an alert Policy using the CLC API.
:return: response dictionary from the CLC API.
"""
p = self.module.params
alias = p['alias']
email_list = p['alert_recipients']
metric = p['metric']
duration = p['duration']
threshold = p['threshold']
policy_name = p['name']
arguments = json.dumps(
{
'name': policy_name,
'actions': [{
'action': 'email',
'settings': {
'recipients': email_list
}
}],
'triggers': [{
'metric': metric,
'duration': duration,
'threshold': threshold
}]
}
)
try:
result = self.clc.v2.API.Call(
'POST',
'/v2/alertPolicies/%s' % alias,
arguments)
except APIFailedResponse as e:
return self.module.fail_json(
msg='Unable to create alert policy "{0}". {1}'.format(
policy_name, str(e.response_text)))
return result
def _update_alert_policy(self, alert_policy_id):
"""
Update alert policy using the CLC API.
:param alert_policy_id: The clc alert policy id
:return: response dictionary from the CLC API.
"""
p = self.module.params
alias = p['alias']
email_list = p['alert_recipients']
metric = p['metric']
duration = p['duration']
threshold = p['threshold']
policy_name = p['name']
arguments = json.dumps(
{
'name': policy_name,
'actions': [{
'action': 'email',
'settings': {
'recipients': email_list
}
}],
'triggers': [{
'metric': metric,
'duration': duration,
'threshold': threshold
}]
}
)
try:
result = self.clc.v2.API.Call(
'PUT', '/v2/alertPolicies/%s/%s' %
(alias, alert_policy_id), arguments)
except APIFailedResponse as e:
return self.module.fail_json(
msg='Unable to update alert policy "{0}". {1}'.format(
policy_name, str(e.response_text)))
return result
def _delete_alert_policy(self, alias, policy_id):
"""
Delete an alert policy using the CLC API.
:param alias : the account alias
:param policy_id: the alert policy id
:return: response dictionary from the CLC API.
"""
try:
result = self.clc.v2.API.Call(
'DELETE', '/v2/alertPolicies/%s/%s' %
(alias, policy_id), None)
except APIFailedResponse as e:
return self.module.fail_json(
msg='Unable to delete alert policy id "{0}". {1}'.format(
policy_id, str(e.response_text)))
return result
def _alert_policy_exists(self, policy_name):
"""
Check to see if an alert policy exists
:param policy_name: name of the alert policy
:return: boolean of if the policy exists
"""
result = False
for policy_id in self.policy_dict:
if self.policy_dict.get(policy_id).get('name') == policy_name:
result = self.policy_dict.get(policy_id)
return result
def _get_alert_policy_id(self, module, alert_policy_name):
"""
Retrieves the alert policy id of the account based on the name of the policy
:param module: the AnsibleModule object
:param alert_policy_name: the alert policy name
:return: alert_policy_id: The alert policy id
"""
alert_policy_id = None
for policy_id in self.policy_dict:
if self.policy_dict.get(policy_id).get('name') == alert_policy_name:
if not alert_policy_id:
alert_policy_id = policy_id
else:
return module.fail_json(
msg='multiple alert policies were found with policy name : %s' % alert_policy_name)
return alert_policy_id
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
argument_dict = ClcAlertPolicy._define_module_argument_spec()
module = AnsibleModule(supports_check_mode=True, **argument_dict)
clc_alert_policy = ClcAlertPolicy(module)
clc_alert_policy.process_request()
if __name__ == '__main__':
main()

@@ -1,299 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_blueprint_package
short_description: Deploys a blueprint package on a set of servers in CenturyLink Cloud
description:
- An Ansible module to deploy a blueprint package on a set of servers in CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
server_ids:
description:
- A list of server IDs to deploy the blueprint package to.
type: list
required: true
elements: str
package_id:
description:
- The package ID of the blueprint.
type: str
required: true
package_params:
description:
- The dictionary of arguments required to deploy the blueprint.
type: dict
default: {}
required: false
state:
description:
- Whether to install or uninstall the package. Currently it supports only V(present) for install action.
type: str
required: false
default: present
choices: ['present']
wait:
description:
- Whether to wait for the tasks to finish before returning.
type: str
default: 'True'
required: false
"""
EXAMPLES = r"""
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Deploy package
community.general.clc_blueprint_package:
server_ids:
- UC1TEST-SERVER1
- UC1TEST-SERVER2
package_id: 77abb844-579d-478d-3955-c69ab4a7ba1a
package_params: {}
"""
RETURN = r"""
server_ids:
description: The list of server IDs that are changed.
returned: success
type: list
sample: ["UC1TEST-SERVER1", "UC1TEST-SERVER2"]
"""
__version__ = '${version}'
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcBlueprintPackage:
clc = clc_sdk
module = None
def __init__(self, module):
"""
Construct module
"""
self.module = module
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
p = self.module.params
changed = False
changed_server_ids = []
self._set_clc_credentials_from_env()
server_ids = p['server_ids']
package_id = p['package_id']
package_params = p['package_params']
state = p['state']
if state == 'present':
changed, changed_server_ids, request_list = self.ensure_package_installed(
server_ids, package_id, package_params)
self._wait_for_requests_to_complete(request_list)
self.module.exit_json(changed=changed, server_ids=changed_server_ids)
@staticmethod
def define_argument_spec():
"""
This function defines the argument spec dictionary required by the
package module
:return: the argument spec dictionary
"""
argument_spec = dict(
server_ids=dict(type='list', elements='str', required=True),
package_id=dict(required=True),
package_params=dict(type='dict', default={}),
wait=dict(default=True), # @FIXME should be bool?
state=dict(default='present', choices=['present'])
)
return argument_spec
def ensure_package_installed(self, server_ids, package_id, package_params):
"""
Ensure the package is installed in the given list of servers
:param server_ids: the server list where the package needs to be installed
:param package_id: the blueprint package id
:param package_params: the package arguments
:return: (changed, server_ids, request_list)
changed: A flag indicating if a change was made
server_ids: The list of servers modified
request_list: The list of request objects from clc-sdk
"""
changed = False
request_list = []
servers = self._get_servers_from_clc(
server_ids,
'Failed to get servers from CLC')
for server in servers:
if not self.module.check_mode:
request = self.clc_install_package(
server,
package_id,
package_params)
request_list.append(request)
changed = True
return changed, server_ids, request_list
def clc_install_package(self, server, package_id, package_params):
"""
Install the package to a given clc server
:param server: The server object where the package needs to be installed
:param package_id: The blueprint package id
:param package_params: the required argument dict for the package installation
:return: The result object from the CLC API call
"""
result = None
try:
result = server.ExecutePackage(
package_id=package_id,
parameters=package_params)
except CLCException as ex:
self.module.fail_json(msg='Failed to install package : {0} to server {1}. {2}'.format(
package_id, server.id, ex.message
))
return result
def _wait_for_requests_to_complete(self, request_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param request_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in request_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process package install request')
def _get_servers_from_clc(self, server_list, message):
"""
Internal function to fetch list of CLC server objects from a list of server ids
:param server_list: the list of server ids
:param message: the error message to raise if there is any error
:return: the list of CLC server objects
"""
try:
return self.clc.v2.Servers(server_list).servers
except CLCException as ex:
self.module.fail_json(msg=message + ': %s' % ex)
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
Main function
:return: None
"""
module = AnsibleModule(
argument_spec=ClcBlueprintPackage.define_argument_spec(),
supports_check_mode=True
)
clc_blueprint_package = ClcBlueprintPackage(module)
clc_blueprint_package.process_request()
if __name__ == '__main__':
main()

@@ -1,586 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_firewall_policy
short_description: Create/delete/update firewall policies
description:
- Create, delete, or update firewall policies on CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
location:
description:
- Target datacenter for the firewall policy.
type: str
required: true
state:
description:
- Whether to create or delete the firewall policy.
type: str
default: present
choices: ['present', 'absent']
source:
description:
- The list of source addresses for traffic on the originating firewall. This is required when O(state=present).
type: list
elements: str
destination:
description:
- The list of destination addresses for traffic on the terminating firewall. This is required when O(state=present).
type: list
elements: str
ports:
description:
- The list of ports associated with the policy. TCP and UDP can take in single ports or port ranges.
- "Example: V(['any', 'icmp', 'TCP/123', 'UDP/123', 'TCP/123-456', 'UDP/123-456'])."
type: list
elements: str
firewall_policy_id:
description:
- ID of the firewall policy. This is required to update or delete an existing firewall policy.
type: str
source_account_alias:
description:
- CLC alias for the source account.
type: str
required: true
destination_account_alias:
description:
- CLC alias for the destination account.
type: str
wait:
description:
- Whether to wait for the provisioning tasks to finish before returning.
type: str
default: 'True'
enabled:
description:
- Whether the firewall policy is enabled or disabled.
type: str
choices: ['True', 'False']
default: 'True'
"""
EXAMPLES = r"""
- name: Create Firewall Policy
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create / Verify a Firewall Policy at CenturyLink Cloud
community.general.clc_firewall_policy:
source_account_alias: WFAD
location: VA1
state: present
source: 10.128.216.0/24
destination: 10.128.216.0/24
ports: Any
destination_account_alias: WFAD
- name: Delete Firewall Policy
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Delete a Firewall Policy at CenturyLink Cloud
community.general.clc_firewall_policy:
source_account_alias: WFAD
location: VA1
state: absent
firewall_policy_id: c62105233d7a4231bd2e91b9c791e43e1
"""
RETURN = r"""
firewall_policy_id:
description: The firewall policy ID.
returned: success
type: str
sample: fc36f1bfd47242e488a9c44346438c05
firewall_policy:
description: The firewall policy information.
returned: success
type: dict
sample:
{
"destination":[
"10.1.1.0/24",
"10.2.2.0/24"
],
"destinationAccount":"wfad",
"enabled":true,
"id":"fc36f1bfd47242e488a9c44346438c05",
"links":[
{
"href":"http://api.ctl.io/v2-experimental/firewallPolicies/wfad/uc1/fc36f1bfd47242e488a9c44346438c05",
"rel":"self",
"verbs":[
"GET",
"PUT",
"DELETE"
]
}
],
"ports":[
"any"
],
"source":[
"10.1.1.0/24",
"10.2.2.0/24"
],
"status":"active"
}
"""
__version__ = '${version}'
import os
import traceback
from ansible.module_utils.six.moves.urllib.parse import urlparse
from time import sleep
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import APIFailedResponse
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcFirewallPolicy:
clc = None
def __init__(self, module):
"""
Construct module
"""
self.clc = clc_sdk
self.module = module
self.firewall_dict = {}
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
location=dict(required=True),
source_account_alias=dict(required=True),
destination_account_alias=dict(),
firewall_policy_id=dict(),
ports=dict(type='list', elements='str'),
source=dict(type='list', elements='str'),
destination=dict(type='list', elements='str'),
wait=dict(default=True), # @FIXME type=bool
state=dict(default='present', choices=['present', 'absent']),
enabled=dict(default=True, choices=[True, False])
)
return argument_spec
def process_request(self):
"""
Execute the main code path, and handle the request
:return: none
"""
changed = False
firewall_policy = None
location = self.module.params.get('location')
source_account_alias = self.module.params.get('source_account_alias')
destination_account_alias = self.module.params.get(
'destination_account_alias')
firewall_policy_id = self.module.params.get('firewall_policy_id')
ports = self.module.params.get('ports')
source = self.module.params.get('source')
destination = self.module.params.get('destination')
wait = self.module.params.get('wait')
state = self.module.params.get('state')
enabled = self.module.params.get('enabled')
self.firewall_dict = {
'location': location,
'source_account_alias': source_account_alias,
'destination_account_alias': destination_account_alias,
'firewall_policy_id': firewall_policy_id,
'ports': ports,
'source': source,
'destination': destination,
'wait': wait,
'state': state,
'enabled': enabled}
self._set_clc_credentials_from_env()
if state == 'absent':
changed, firewall_policy_id, firewall_policy = self._ensure_firewall_policy_is_absent(
source_account_alias, location, self.firewall_dict)
elif state == 'present':
changed, firewall_policy_id, firewall_policy = self._ensure_firewall_policy_is_present(
source_account_alias, location, self.firewall_dict)
return self.module.exit_json(
changed=changed,
firewall_policy_id=firewall_policy_id,
firewall_policy=firewall_policy)
@staticmethod
def _get_policy_id_from_response(response):
"""
Method to parse out the policy id from creation response
:param response: response from firewall creation API call
:return: policy_id: firewall policy id from creation call
"""
url = response.get('links')[0]['href']
path = urlparse(url).path
path_list = os.path.split(path)
policy_id = path_list[-1]
return policy_id
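# For example, a creation response whose first link is the policy's 'self'
# link, such as
#   .../v2-experimental/firewallPolicies/wfad/uc1/fc36f1bfd47242e488a9c44346438c05
# yields 'fc36f1bfd47242e488a9c44346438c05' (the trailing path component) as
# the policy id.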
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _ensure_firewall_policy_is_present(
self,
source_account_alias,
location,
firewall_dict):
"""
Ensures that a given firewall policy is present
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_dict: dictionary of request parameters for firewall policy
:return: (changed, firewall_policy_id, firewall_policy)
changed: flag for if a change occurred
firewall_policy_id: the firewall policy id that was created/updated
firewall_policy: The firewall_policy object
"""
firewall_policy = None
firewall_policy_id = firewall_dict.get('firewall_policy_id')
if firewall_policy_id is None:
if not self.module.check_mode:
response = self._create_firewall_policy(
source_account_alias,
location,
firewall_dict)
firewall_policy_id = self._get_policy_id_from_response(
response)
changed = True
else:
firewall_policy = self._get_firewall_policy(
source_account_alias, location, firewall_policy_id)
if not firewall_policy:
return self.module.fail_json(
msg='Unable to find the firewall policy id : {0}'.format(
firewall_policy_id))
changed = self._compare_get_request_with_dict(
firewall_policy,
firewall_dict)
if not self.module.check_mode and changed:
self._update_firewall_policy(
source_account_alias,
location,
firewall_policy_id,
firewall_dict)
if changed and firewall_policy_id:
firewall_policy = self._wait_for_requests_to_complete(
source_account_alias,
location,
firewall_policy_id)
return changed, firewall_policy_id, firewall_policy
def _ensure_firewall_policy_is_absent(
self,
source_account_alias,
location,
firewall_dict):
"""
Ensures that a given firewall policy is removed if present
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_dict: firewall policy to delete
:return: (changed, firewall_policy_id, response)
changed: flag for if a change occurred
firewall_policy_id: the firewall policy id that was deleted
response: response from CLC API call
"""
changed = False
response = []
firewall_policy_id = firewall_dict.get('firewall_policy_id')
result = self._get_firewall_policy(
source_account_alias, location, firewall_policy_id)
if result:
if not self.module.check_mode:
response = self._delete_firewall_policy(
source_account_alias,
location,
firewall_policy_id)
changed = True
return changed, firewall_policy_id, response
def _create_firewall_policy(
self,
source_account_alias,
location,
firewall_dict):
"""
Creates the firewall policy for the given account alias
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_dict: dictionary of request parameters for firewall policy
:return: response from CLC API call
"""
payload = {
'destinationAccount': firewall_dict.get('destination_account_alias'),
'source': firewall_dict.get('source'),
'destination': firewall_dict.get('destination'),
'ports': firewall_dict.get('ports')}
try:
response = self.clc.v2.API.Call(
'POST', '/v2-experimental/firewallPolicies/%s/%s' %
(source_account_alias, location), payload)
except APIFailedResponse as e:
return self.module.fail_json(
msg="Unable to create firewall policy. %s" %
str(e.response_text))
return response
def _delete_firewall_policy(
self,
source_account_alias,
location,
firewall_policy_id):
"""
Deletes a given firewall policy for an account alias in a datacenter
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_policy_id: firewall policy id to delete
:return: response: response from CLC API call
"""
try:
response = self.clc.v2.API.Call(
'DELETE', '/v2-experimental/firewallPolicies/%s/%s/%s' %
(source_account_alias, location, firewall_policy_id))
except APIFailedResponse as e:
return self.module.fail_json(
msg="Unable to delete the firewall policy id : {0}. {1}".format(
firewall_policy_id, str(e.response_text)))
return response
def _update_firewall_policy(
self,
source_account_alias,
location,
firewall_policy_id,
firewall_dict):
"""
Updates a firewall policy for a given datacenter and account alias
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_policy_id: firewall policy id to update
:param firewall_dict: dictionary of request parameters for firewall policy
:return: response: response from CLC API call
"""
try:
response = self.clc.v2.API.Call(
'PUT',
'/v2-experimental/firewallPolicies/%s/%s/%s' %
(source_account_alias,
location,
firewall_policy_id),
firewall_dict)
except APIFailedResponse as e:
return self.module.fail_json(
msg="Unable to update the firewall policy id : {0}. {1}".format(
firewall_policy_id, str(e.response_text)))
return response
@staticmethod
def _compare_get_request_with_dict(response, firewall_dict):
"""
Helper method to compare the json response for getting the firewall policy with the request parameters
:param response: response from the get method
:param firewall_dict: dictionary of request parameters for firewall policy
:return: changed: Boolean that returns true if there are differences between
the response parameters and the playbook parameters
"""
changed = False
response_dest_account_alias = response.get('destinationAccount')
response_enabled = response.get('enabled')
response_source = response.get('source')
response_dest = response.get('destination')
response_ports = response.get('ports')
request_dest_account_alias = firewall_dict.get(
'destination_account_alias')
request_enabled = firewall_dict.get('enabled')
if request_enabled is None:
request_enabled = True
request_source = firewall_dict.get('source')
request_dest = firewall_dict.get('destination')
request_ports = firewall_dict.get('ports')
if (
response_dest_account_alias and str(response_dest_account_alias) != str(request_dest_account_alias)) or (
response_enabled != request_enabled) or (
response_source and response_source != request_source) or (
response_dest and response_dest != request_dest) or (
response_ports and response_ports != request_ports):
changed = True
return changed
def _get_firewall_policy(
self,
source_account_alias,
location,
firewall_policy_id):
"""
Get back details for a particular firewall policy
:param source_account_alias: the source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_policy_id: id of the firewall policy to get
:return: response - The response from CLC API call
"""
response = None
try:
response = self.clc.v2.API.Call(
'GET', '/v2-experimental/firewallPolicies/%s/%s/%s' %
(source_account_alias, location, firewall_policy_id))
except APIFailedResponse as e:
if e.response_status_code != 404:
self.module.fail_json(
msg="Unable to fetch the firewall policy with id : {0}. {1}".format(
firewall_policy_id, str(e.response_text)))
return response
def _wait_for_requests_to_complete(
self,
source_account_alias,
location,
firewall_policy_id,
wait_limit=50):
"""
Waits until the CLC requests are complete if the wait argument is True
:param source_account_alias: The source account alias for the firewall policy
:param location: datacenter of the firewall policy
:param firewall_policy_id: The firewall policy id
:param wait_limit: The number of times to check the status for completion
:return: the firewall_policy object
"""
wait = self.module.params.get('wait')
count = 0
firewall_policy = None
while wait:
count += 1
firewall_policy = self._get_firewall_policy(
source_account_alias, location, firewall_policy_id)
status = firewall_policy.get('status')
if status == 'active' or count > wait_limit:
wait = False
else:
# wait for 2 seconds
sleep(2)
return firewall_policy
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcFirewallPolicy._define_module_argument_spec(),
supports_check_mode=True)
clc_firewall = ClcFirewallPolicy(module)
clc_firewall.process_request()
if __name__ == '__main__':
main()

@@ -1,512 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_group
short_description: Create/delete Server Groups at Centurylink Cloud
description:
- Create or delete Server Groups at CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the Server Group.
type: str
required: true
description:
description:
- A description of the Server Group.
type: str
required: false
parent:
description:
- The parent group of the server group. If parent is not provided, it creates the group at top level.
type: str
required: false
location:
description:
- Datacenter to create the group in. If location is not provided, the group gets created in the default datacenter associated
with the account.
type: str
required: false
state:
description:
- Whether to create or delete the group.
type: str
default: present
choices: ['present', 'absent']
wait:
description:
- Whether to wait for the tasks to finish before returning.
type: bool
default: true
required: false
"""
EXAMPLES = r"""
# Create a Server Group
- name: Create Server Group
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create / Verify a Server Group at CenturyLink Cloud
community.general.clc_group:
name: My Cool Server Group
parent: Default Group
state: present
register: clc
- name: Debug
ansible.builtin.debug:
var: clc
# Delete a Server Group
- name: Delete Server Group
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Delete / Verify Absent a Server Group at CenturyLink Cloud
community.general.clc_group:
name: My Cool Server Group
parent: Default Group
state: absent
register: clc
- name: Debug
ansible.builtin.debug:
var: clc
"""
RETURN = r"""
group:
description: The group information.
returned: success
type: dict
sample:
{
"changeInfo":{
"createdBy":"service.wfad",
"createdDate":"2015-07-29T18:52:47Z",
"modifiedBy":"service.wfad",
"modifiedDate":"2015-07-29T18:52:47Z"
},
"customFields":[
],
"description":"test group",
"groups":[
],
"id":"bb5f12a3c6044ae4ad0a03e73ae12cd1",
"links":[
{
"href":"/v2/groups/wfad",
"rel":"createGroup",
"verbs":[
"POST"
]
},
{
"href":"/v2/servers/wfad",
"rel":"createServer",
"verbs":[
"POST"
]
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1",
"rel":"self",
"verbs":[
"GET",
"PATCH",
"DELETE"
]
},
{
"href":"/v2/groups/wfad/086ac1dfe0b6411989e8d1b77c4065f0",
"id":"086ac1dfe0b6411989e8d1b77c4065f0",
"rel":"parentGroup"
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/defaults",
"rel":"defaults",
"verbs":[
"GET",
"POST"
]
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/billing",
"rel":"billing"
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/archive",
"rel":"archiveGroupAction"
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/statistics",
"rel":"statistics"
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/upcomingScheduledActivities",
"rel":"upcomingScheduledActivities"
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/horizontalAutoscalePolicy",
"rel":"horizontalAutoscalePolicyMapping",
"verbs":[
"GET",
"PUT",
"DELETE"
]
},
{
"href":"/v2/groups/wfad/bb5f12a3c6044ae4ad0a03e73ae12cd1/scheduledActivities",
"rel":"scheduledActivities",
"verbs":[
"GET",
"POST"
]
}
],
"locationId":"UC1",
"name":"test group",
"status":"active",
"type":"default"
}
"""
__version__ = '${version}'
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcGroup(object):
clc = None
root_group = None
def __init__(self, module):
"""
Construct module
"""
self.clc = clc_sdk
self.module = module
self.group_dict = {}
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Execute the main code path, and handle the request
:return: none
"""
location = self.module.params.get('location')
group_name = self.module.params.get('name')
parent_name = self.module.params.get('parent')
group_description = self.module.params.get('description')
state = self.module.params.get('state')
self._set_clc_credentials_from_env()
self.group_dict = self._get_group_tree_for_datacenter(
datacenter=location)
if state == "absent":
changed, group, requests = self._ensure_group_is_absent(
group_name=group_name, parent_name=parent_name)
if requests:
self._wait_for_requests_to_complete(requests)
else:
changed, group = self._ensure_group_is_present(
group_name=group_name, parent_name=parent_name, group_description=group_description)
try:
group = group.data
except AttributeError:
group = group_name
self.module.exit_json(changed=changed, group=group)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(required=True),
description=dict(),
parent=dict(),
location=dict(),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=True))
return argument_spec
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _ensure_group_is_absent(self, group_name, parent_name):
"""
Ensure that group_name is absent by deleting it if necessary
:param group_name: string - the name of the clc server group to delete
:param parent_name: string - the name of the parent group for group_name
:return: (changed, group, results)
"""
changed = False
group = []
results = []
if self._group_exists(group_name=group_name, parent_name=parent_name):
if not self.module.check_mode:
group.append(group_name)
result = self._delete_group(group_name)
results.append(result)
changed = True
return changed, group, results
def _delete_group(self, group_name):
"""
Delete the provided server group
:param group_name: string - the server group to delete
:return: none
"""
response = None
group, parent = self.group_dict.get(group_name)
try:
response = group.Delete()
except CLCException as ex:
self.module.fail_json(msg='Failed to delete group :{0}. {1}'.format(
group_name, ex.response_text
))
return response
def _ensure_group_is_present(
self,
group_name,
parent_name,
group_description):
"""
Checks to see if a server group exists, creates it if it doesn't.
:param group_name: the name of the group to validate/create
:param parent_name: the name of the parent group for group_name
:param group_description: a short description of the server group (used when creating)
:return: (changed, group) -
changed: Boolean- whether a change was made,
group: A clc group object for the group
"""
if not self.root_group:
raise AssertionError("Implementation Error: Root Group not set")
parent = parent_name if parent_name is not None else self.root_group.name
description = group_description
changed = False
group = group_name
parent_exists = self._group_exists(group_name=parent, parent_name=None)
child_exists = self._group_exists(
group_name=group_name,
parent_name=parent)
if parent_exists and child_exists:
group, parent = self.group_dict[group_name]
changed = False
elif parent_exists and not child_exists:
if not self.module.check_mode:
group = self._create_group(
group=group,
parent=parent,
description=description)
changed = True
else:
self.module.fail_json(
msg="parent group: " +
parent +
" does not exist")
return changed, group
def _create_group(self, group, parent, description):
"""
Create the provided server group
:param group: clc_sdk.Group - the group to create
:param parent: clc_sdk.Parent - the parent group for {group}
:param description: string - a text description of the group
:return: clc_sdk.Group - the created group
"""
response = None
(parent, grandparent) = self.group_dict[parent]
try:
response = parent.Create(name=group, description=description)
except CLCException as ex:
self.module.fail_json(msg='Failed to create group :{0}. {1}'.format(
group, ex.response_text))
return response
def _group_exists(self, group_name, parent_name):
"""
Check to see if a group exists
:param group_name: string - the group to check
:param parent_name: string - the parent of group_name
:return: boolean - whether the group exists
"""
result = False
if group_name in self.group_dict:
(group, parent) = self.group_dict[group_name]
if parent_name is None or parent_name == parent.name:
result = True
return result
def _get_group_tree_for_datacenter(self, datacenter=None):
"""
Walk the tree of groups for a datacenter
:param datacenter: string - the datacenter to walk (ex: 'UC1')
:return: a dictionary of groups and parents
"""
self.root_group = self.clc.v2.Datacenter(
location=datacenter).RootGroup()
return self._walk_groups_recursive(
parent_group=None,
child_group=self.root_group)
def _walk_groups_recursive(self, parent_group, child_group):
"""
Walk a parent-child tree of groups, starting with the provided child group
:param parent_group: clc_sdk.Group - the parent group to start the walk
:param child_group: clc_sdk.Group - the child group to start the walk
:return: a dictionary of groups and parents
"""
result = {str(child_group): (child_group, parent_group)}
groups = child_group.Subgroups().groups
if len(groups) > 0:
for group in groups:
if group.type != 'default':
continue
result.update(self._walk_groups_recursive(child_group, group))
return result
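# The resulting dict maps each group name to a (group, parent) tuple; the root
# group itself is stored with parent None. For example (group names taken from
# the EXAMPLES section above, objects illustrative):
#
#   {
#       'Default Group': (<Group: Default Group>, <root group>),
#       'My Cool Server Group': (<Group: My Cool Server Group>, <Group: Default Group>),
#   }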
def _wait_for_requests_to_complete(self, requests_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param requests_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in requests_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process group request')
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcGroup._define_module_argument_spec(),
supports_check_mode=True)
clc_group = ClcGroup(module)
clc_group.process_request()
if __name__ == '__main__':
main()

@@ -1,938 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_loadbalancer
short_description: Create or delete shared loadbalancers in CenturyLink Cloud
description:
- An Ansible module to create or delete shared loadbalancers in CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the loadbalancer.
type: str
required: true
description:
description:
- A description for the loadbalancer.
type: str
alias:
description:
- The alias of your CLC Account.
type: str
required: true
location:
description:
- The location of the datacenter where the load balancer resides.
type: str
required: true
method:
description:
- The balancing method for the load balancer pool.
type: str
choices: ['leastConnection', 'roundRobin']
persistence:
description:
- The persistence method for the load balancer.
type: str
choices: ['standard', 'sticky']
port:
description:
- Port to configure on the public-facing side of the load balancer pool.
type: str
choices: ['80', '443']
nodes:
description:
- A list of nodes that need to be added to the load balancer pool.
type: list
default: []
elements: dict
status:
description:
- The status of the loadbalancer.
type: str
default: enabled
choices: ['enabled', 'disabled']
state:
description:
- Whether to create or delete the load balancer pool.
type: str
default: present
choices: ['present', 'absent', 'port_absent', 'nodes_present', 'nodes_absent']
"""
EXAMPLES = r"""
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Create Loadbalancer
hosts: localhost
connection: local
tasks:
- name: Actually Create things
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
location: WA1
port: 443
nodes:
- ipAddress: 10.11.22.123
privatePort: 80
state: present
- name: Add node to an existing loadbalancer pool
hosts: localhost
connection: local
tasks:
- name: Actually Create things
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
location: WA1
port: 443
nodes:
- ipAddress: 10.11.22.234
privatePort: 80
state: nodes_present
- name: Remove node from an existing loadbalancer pool
hosts: localhost
connection: local
tasks:
- name: Actually Create things
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
location: WA1
port: 443
nodes:
- ipAddress: 10.11.22.234
privatePort: 80
state: nodes_absent
- name: Delete LoadbalancerPool
hosts: localhost
connection: local
tasks:
- name: Actually Delete things
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
location: WA1
port: 443
nodes:
- ipAddress: 10.11.22.123
privatePort: 80
state: port_absent
- name: Delete Loadbalancer
hosts: localhost
connection: local
tasks:
- name: Actually Delete things
community.general.clc_loadbalancer:
name: test
description: test
alias: TEST
location: WA1
port: 443
nodes:
- ipAddress: 10.11.22.123
privatePort: 80
state: absent
"""
RETURN = r"""
loadbalancer:
description: The load balancer result object from CLC.
returned: success
type: dict
sample:
{
"description":"test-lb",
"id":"ab5b18cb81e94ab9925b61d1ca043fb5",
"ipAddress":"66.150.174.197",
"links":[
{
"href":"/v2/sharedLoadBalancers/wfad/wa1/ab5b18cb81e94ab9925b61d1ca043fb5",
"rel":"self",
"verbs":[
"GET",
"PUT",
"DELETE"
]
},
{
"href":"/v2/sharedLoadBalancers/wfad/wa1/ab5b18cb81e94ab9925b61d1ca043fb5/pools",
"rel":"pools",
"verbs":[
"GET",
"POST"
]
}
],
"name":"test-lb",
"pools":[
],
"status":"enabled"
}
"""
__version__ = '${version}'
import json
import os
import traceback
from time import sleep
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import APIFailedResponse
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcLoadBalancer:
clc = None
def __init__(self, module):
"""
Construct module
"""
self.clc = clc_sdk
self.module = module
self.lb_dict = {}
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Execute the main code path, and handle the request
:return: none
"""
changed = False
result_lb = None
loadbalancer_name = self.module.params.get('name')
loadbalancer_alias = self.module.params.get('alias')
loadbalancer_location = self.module.params.get('location')
loadbalancer_description = self.module.params.get('description')
loadbalancer_port = self.module.params.get('port')
loadbalancer_method = self.module.params.get('method')
loadbalancer_persistence = self.module.params.get('persistence')
loadbalancer_nodes = self.module.params.get('nodes')
loadbalancer_status = self.module.params.get('status')
state = self.module.params.get('state')
if loadbalancer_description is None:
loadbalancer_description = loadbalancer_name
self._set_clc_credentials_from_env()
self.lb_dict = self._get_loadbalancer_list(
alias=loadbalancer_alias,
location=loadbalancer_location)
if state == 'present':
changed, result_lb, lb_id = self.ensure_loadbalancer_present(
name=loadbalancer_name,
alias=loadbalancer_alias,
location=loadbalancer_location,
description=loadbalancer_description,
status=loadbalancer_status)
if loadbalancer_port:
changed, result_pool, pool_id = self.ensure_loadbalancerpool_present(
lb_id=lb_id,
alias=loadbalancer_alias,
location=loadbalancer_location,
method=loadbalancer_method,
persistence=loadbalancer_persistence,
port=loadbalancer_port)
if loadbalancer_nodes:
changed, result_nodes = self.ensure_lbpool_nodes_set(
alias=loadbalancer_alias,
location=loadbalancer_location,
name=loadbalancer_name,
port=loadbalancer_port,
nodes=loadbalancer_nodes)
elif state == 'absent':
changed, result_lb = self.ensure_loadbalancer_absent(
name=loadbalancer_name,
alias=loadbalancer_alias,
location=loadbalancer_location)
elif state == 'port_absent':
changed, result_lb = self.ensure_loadbalancerpool_absent(
alias=loadbalancer_alias,
location=loadbalancer_location,
name=loadbalancer_name,
port=loadbalancer_port)
elif state == 'nodes_present':
changed, result_lb = self.ensure_lbpool_nodes_present(
alias=loadbalancer_alias,
location=loadbalancer_location,
name=loadbalancer_name,
port=loadbalancer_port,
nodes=loadbalancer_nodes)
elif state == 'nodes_absent':
changed, result_lb = self.ensure_lbpool_nodes_absent(
alias=loadbalancer_alias,
location=loadbalancer_location,
name=loadbalancer_name,
port=loadbalancer_port,
nodes=loadbalancer_nodes)
self.module.exit_json(changed=changed, loadbalancer=result_lb)
def ensure_loadbalancer_present(
self, name, alias, location, description, status):
"""
Checks to see if a load balancer exists and creates one if it does not.
:param name: Name of loadbalancer
:param alias: Alias of account
:param location: Datacenter
:param description: Description of loadbalancer
:param status: Enabled / Disabled
:return: (changed, result, lb_id)
changed: Boolean whether a change was made
result: The result object from the CLC load balancer request
lb_id: The load balancer id
"""
changed = False
result = name
lb_id = self._loadbalancer_exists(name=name)
if not lb_id:
if not self.module.check_mode:
result = self.create_loadbalancer(name=name,
alias=alias,
location=location,
description=description,
status=status)
lb_id = result.get('id')
changed = True
return changed, result, lb_id
def ensure_loadbalancerpool_present(
self, lb_id, alias, location, method, persistence, port):
"""
Checks to see if a load balancer pool exists and creates one if it does not.
:param lb_id: The loadbalancer id
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param method: the load balancing method
:param persistence: the load balancing persistence type
:param port: the port that the load balancer will listen on
:return: (changed, group, pool_id) -
changed: Boolean whether a change was made
result: The result from the CLC API call
pool_id: The string id of the load balancer pool
"""
changed = False
result = port
if not lb_id:
return changed, None, None
pool_id = self._loadbalancerpool_exists(
alias=alias,
location=location,
port=port,
lb_id=lb_id)
if not pool_id:
if not self.module.check_mode:
result = self.create_loadbalancerpool(
alias=alias,
location=location,
lb_id=lb_id,
method=method,
persistence=persistence,
port=port)
pool_id = result.get('id')
changed = True
return changed, result, pool_id
def ensure_loadbalancer_absent(self, name, alias, location):
"""
Checks to see if a load balancer exists and deletes it if it does
:param name: Name of the load balancer
:param alias: Alias of account
:param location: Datacenter
:return: (changed, result)
changed: Boolean whether a change was made
result: The result from the CLC API Call
"""
changed = False
result = name
lb_exists = self._loadbalancer_exists(name=name)
if lb_exists:
if not self.module.check_mode:
result = self.delete_loadbalancer(alias=alias,
location=location,
name=name)
changed = True
return changed, result
def ensure_loadbalancerpool_absent(self, alias, location, name, port):
"""
Checks to see if a load balancer pool exists and deletes it if it does
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param name: the name of the load balancer
:param port: the port that the load balancer listens on
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
result = None
lb_exists = self._loadbalancer_exists(name=name)
if lb_exists:
lb_id = self._get_loadbalancer_id(name=name)
pool_id = self._loadbalancerpool_exists(
alias=alias,
location=location,
port=port,
lb_id=lb_id)
if pool_id:
changed = True
if not self.module.check_mode:
result = self.delete_loadbalancerpool(
alias=alias,
location=location,
lb_id=lb_id,
pool_id=pool_id)
else:
result = "Pool doesn't exist"
else:
result = "LB Doesn't Exist"
return changed, result
def ensure_lbpool_nodes_set(self, alias, location, name, port, nodes):
"""
Checks to see if the provided list of nodes exists for the pool
and sets the nodes if any in the list do not exist
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param name: the name of the load balancer
:param port: the port that the load balancer will listen on
:param nodes: The list of nodes to be updated to the pool
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
result = {}
changed = False
lb_exists = self._loadbalancer_exists(name=name)
if lb_exists:
lb_id = self._get_loadbalancer_id(name=name)
pool_id = self._loadbalancerpool_exists(
alias=alias,
location=location,
port=port,
lb_id=lb_id)
if pool_id:
nodes_exist = self._loadbalancerpool_nodes_exists(alias=alias,
location=location,
lb_id=lb_id,
pool_id=pool_id,
nodes_to_check=nodes)
if not nodes_exist:
changed = True
result = self.set_loadbalancernodes(alias=alias,
location=location,
lb_id=lb_id,
pool_id=pool_id,
nodes=nodes)
else:
result = "Pool doesn't exist"
else:
result = "Load balancer doesn't Exist"
return changed, result
def ensure_lbpool_nodes_present(self, alias, location, name, port, nodes):
"""
Checks to see if the provided list of nodes exists for the pool and adds any missing nodes to the pool
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param name: the name of the load balancer
:param port: the port that the load balancer will listen on
:param nodes: the list of nodes to be added
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
lb_exists = self._loadbalancer_exists(name=name)
if lb_exists:
lb_id = self._get_loadbalancer_id(name=name)
pool_id = self._loadbalancerpool_exists(
alias=alias,
location=location,
port=port,
lb_id=lb_id)
if pool_id:
changed, result = self.add_lbpool_nodes(alias=alias,
location=location,
lb_id=lb_id,
pool_id=pool_id,
nodes_to_add=nodes)
else:
result = "Pool doesn't exist"
else:
result = "Load balancer doesn't Exist"
return changed, result
def ensure_lbpool_nodes_absent(self, alias, location, name, port, nodes):
"""
Checks to see if the provided list of nodes exists for the pool and removes any that are found
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param name: the name of the load balancer
:param port: the port that the load balancer will listen on
:param nodes: the list of nodes to be removed
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
lb_exists = self._loadbalancer_exists(name=name)
if lb_exists:
lb_id = self._get_loadbalancer_id(name=name)
pool_id = self._loadbalancerpool_exists(
alias=alias,
location=location,
port=port,
lb_id=lb_id)
if pool_id:
changed, result = self.remove_lbpool_nodes(alias=alias,
location=location,
lb_id=lb_id,
pool_id=pool_id,
nodes_to_remove=nodes)
else:
result = "Pool doesn't exist"
else:
result = "Load balancer doesn't Exist"
return changed, result
def create_loadbalancer(self, name, alias, location, description, status):
"""
Create a loadbalancer w/ params
:param name: Name of loadbalancer
:param alias: Alias of account
:param location: Datacenter
:param description: Description for loadbalancer to be created
:param status: Enabled / Disabled
:return: result: The result from the CLC API call
"""
result = None
try:
result = self.clc.v2.API.Call('POST',
'/v2/sharedLoadBalancers/%s/%s' % (alias,
location),
json.dumps({"name": name,
"description": description,
"status": status}))
sleep(1)
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to create load balancer "{0}". {1}'.format(
name, str(e.response_text)))
return result
def create_loadbalancerpool(
self, alias, location, lb_id, method, persistence, port):
"""
Creates a pool on the provided load balancer
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param method: the load balancing method
:param persistence: the load balancing persistence type
:param port: the port that the load balancer will listen on
:return: result: The result from the create API call
"""
result = None
try:
result = self.clc.v2.API.Call(
'POST', '/v2/sharedLoadBalancers/%s/%s/%s/pools' %
(alias, location, lb_id), json.dumps(
{
"port": port, "method": method, "persistence": persistence
}))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to create pool for load balancer id "{0}". {1}'.format(
lb_id, str(e.response_text)))
return result
def delete_loadbalancer(self, alias, location, name):
"""
Delete CLC loadbalancer
:param alias: Alias for account
:param location: Datacenter
:param name: Name of the loadbalancer to delete
:return: result: The result from the CLC API call
"""
result = None
lb_id = self._get_loadbalancer_id(name=name)
try:
result = self.clc.v2.API.Call(
'DELETE', '/v2/sharedLoadBalancers/%s/%s/%s' %
(alias, location, lb_id))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to delete load balancer "{0}". {1}'.format(
name, str(e.response_text)))
return result
def delete_loadbalancerpool(self, alias, location, lb_id, pool_id):
"""
Delete the pool on the provided load balancer
:param alias: The account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param pool_id: the id string of the load balancer pool
:return: result: The result from the delete API call
"""
result = None
try:
result = self.clc.v2.API.Call(
'DELETE', '/v2/sharedLoadBalancers/%s/%s/%s/pools/%s' %
(alias, location, lb_id, pool_id))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to delete pool for load balancer id "{0}". {1}'.format(
lb_id, str(e.response_text)))
return result
def _get_loadbalancer_id(self, name):
"""
Retrieves unique ID of loadbalancer
:param name: Name of loadbalancer
:return: Unique ID of the loadbalancer
"""
id = None
for lb in self.lb_dict:
if lb.get('name') == name:
id = lb.get('id')
return id
def _get_loadbalancer_list(self, alias, location):
"""
Retrieve a list of loadbalancers
:param alias: Alias for account
:param location: Datacenter
:return: JSON data for all loadbalancers at datacenter
"""
result = None
try:
result = self.clc.v2.API.Call(
'GET', '/v2/sharedLoadBalancers/%s/%s' % (alias, location))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to fetch load balancers for account: {0}. {1}'.format(
alias, str(e.response_text)))
return result
def _loadbalancer_exists(self, name):
"""
Verify a loadbalancer exists
:param name: Name of loadbalancer
:return: False or the ID of the existing loadbalancer
"""
result = False
for lb in self.lb_dict:
if lb.get('name') == name:
result = lb.get('id')
return result
def _loadbalancerpool_exists(self, alias, location, port, lb_id):
"""
Checks to see if a pool exists on the specified port on the provided load balancer
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param port: the port to check and see if it exists
:param lb_id: the id string of the provided load balancer
:return: result: The id string of the pool or False
"""
result = False
try:
pool_list = self.clc.v2.API.Call(
'GET', '/v2/sharedLoadBalancers/%s/%s/%s/pools' %
(alias, location, lb_id))
except APIFailedResponse as e:
return self.module.fail_json(
msg='Unable to fetch the load balancer pools for load balancer id: {0}. {1}'.format(
lb_id, str(e.response_text)))
for pool in pool_list:
if int(pool.get('port')) == int(port):
result = pool.get('id')
return result
def _loadbalancerpool_nodes_exists(
self, alias, location, lb_id, pool_id, nodes_to_check):
"""
Checks to see if a set of nodes exists on the specified port on the provided load balancer
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the provided load balancer
:param pool_id: the id string of the load balancer pool
:param nodes_to_check: the list of nodes to check for
:return: result: True / False indicating if the given nodes exist
"""
result = False
nodes = self._get_lbpool_nodes(alias, location, lb_id, pool_id)
for node in nodes_to_check:
if not node.get('status'):
node['status'] = 'enabled'
if node in nodes:
result = True
else:
result = False
return result
def set_loadbalancernodes(self, alias, location, lb_id, pool_id, nodes):
"""
Updates the nodes on the provided pool
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param pool_id: the id string of the pool
:param nodes: a list of dictionaries containing the nodes to set
:return: result: The result from the CLC API call
"""
result = None
if not lb_id:
return result
if not self.module.check_mode:
try:
result = self.clc.v2.API.Call('PUT',
'/v2/sharedLoadBalancers/%s/%s/%s/pools/%s/nodes'
% (alias, location, lb_id, pool_id), json.dumps(nodes))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to set nodes for the load balancer pool id "{0}". {1}'.format(
pool_id, str(e.response_text)))
return result
def add_lbpool_nodes(self, alias, location, lb_id, pool_id, nodes_to_add):
"""
Add nodes to the provided pool
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param pool_id: the id string of the pool
:param nodes_to_add: a list of dictionaries containing the nodes to add
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
result = {}
nodes = self._get_lbpool_nodes(alias, location, lb_id, pool_id)
for node in nodes_to_add:
if not node.get('status'):
node['status'] = 'enabled'
if node not in nodes:
changed = True
nodes.append(node)
if changed is True and not self.module.check_mode:
result = self.set_loadbalancernodes(
alias,
location,
lb_id,
pool_id,
nodes)
return changed, result
def remove_lbpool_nodes(
self, alias, location, lb_id, pool_id, nodes_to_remove):
"""
Removes nodes from the provided pool
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param pool_id: the id string of the pool
:param nodes_to_remove: a list of dictionaries containing the nodes to remove
:return: (changed, result) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
result = {}
nodes = self._get_lbpool_nodes(alias, location, lb_id, pool_id)
for node in nodes_to_remove:
if not node.get('status'):
node['status'] = 'enabled'
if node in nodes:
changed = True
nodes.remove(node)
if changed is True and not self.module.check_mode:
result = self.set_loadbalancernodes(
alias,
location,
lb_id,
pool_id,
nodes)
return changed, result
def _get_lbpool_nodes(self, alias, location, lb_id, pool_id):
"""
Return the list of nodes available to the provided load balancer pool
:param alias: the account alias
:param location: the datacenter the load balancer resides in
:param lb_id: the id string of the load balancer
:param pool_id: the id string of the pool
:return: result: The list of nodes
"""
result = None
try:
result = self.clc.v2.API.Call('GET',
'/v2/sharedLoadBalancers/%s/%s/%s/pools/%s/nodes'
% (alias, location, lb_id, pool_id))
except APIFailedResponse as e:
self.module.fail_json(
msg='Unable to fetch list of available nodes for load balancer pool id: {0}. {1}'.format(
pool_id, str(e.response_text)))
return result
@staticmethod
def define_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
name=dict(required=True),
description=dict(),
location=dict(required=True),
alias=dict(required=True),
port=dict(choices=[80, 443]),
method=dict(choices=['leastConnection', 'roundRobin']),
persistence=dict(choices=['standard', 'sticky']),
nodes=dict(type='list', default=[], elements='dict'),
status=dict(default='enabled', choices=['enabled', 'disabled']),
state=dict(
default='present',
choices=[
'present',
'absent',
'port_absent',
'nodes_present',
'nodes_absent'])
)
return argument_spec
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(argument_spec=ClcLoadBalancer.define_argument_spec(),
supports_check_mode=True)
clc_loadbalancer = ClcLoadBalancer(module)
clc_loadbalancer.process_request()
if __name__ == '__main__':
main()
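For context on the parameters defined in define_argument_spec() above, here is a minimal, hypothetical playbook sketch of how the removed clc_loadbalancer module was typically invoked (the alias, location, and names are placeholders, and the CLC_V2_API_USERNAME/CLC_V2_API_PASSWD environment variables are assumed to be set):

- name: Ensure a shared load balancer with a pool on port 80
  community.general.clc_loadbalancer:
    name: example-lb
    description: Example shared load balancer
    alias: WFAD
    location: UC1
    port: 80
    method: roundRobin
    persistence: standard
    status: enabled
    state: present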


@@ -1,961 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_modify_server
short_description: Modify servers in CenturyLink Cloud
description:
- An Ansible module to modify servers in CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
server_ids:
description:
- A list of server IDs to modify.
type: list
required: true
elements: str
cpu:
description:
- How many CPUs to update on the server.
type: str
memory:
description:
- Memory (in GB) to set to the server.
type: str
anti_affinity_policy_id:
description:
- The anti affinity policy ID to be set for a hyper scale server. This is mutually exclusive with O(anti_affinity_policy_name).
type: str
anti_affinity_policy_name:
description:
- The anti affinity policy name to be set for a hyper scale server. This is mutually exclusive with O(anti_affinity_policy_id).
type: str
alert_policy_id:
description:
- The alert policy ID to be associated to the server. This is mutually exclusive with O(alert_policy_name).
type: str
alert_policy_name:
description:
- The alert policy name to be associated to the server. This is mutually exclusive with O(alert_policy_id).
type: str
state:
description:
- The state to ensure that the provided resources are in.
type: str
default: 'present'
choices: ['present', 'absent']
wait:
description:
- Whether to wait for the provisioning tasks to finish before returning.
type: bool
default: true
"""
EXAMPLES = r"""
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Set the cpu count to 4 on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
cpu: 4
state: present
- name: Set the memory to 8GB on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
memory: 8
state: present
- name: Set the anti affinity policy on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
anti_affinity_policy_name: 'aa_policy'
state: present
- name: Remove the anti affinity policy on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
anti_affinity_policy_name: 'aa_policy'
state: absent
- name: Add the alert policy on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
alert_policy_name: 'alert_policy'
state: present
- name: Remove the alert policy on a server
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
alert_policy_name: 'alert_policy'
state: absent
- name: Set the memory to 16GB and cpu to 8 cores on a list of servers
community.general.clc_modify_server:
server_ids:
- UC1TESTSVR01
- UC1TESTSVR02
cpu: 8
memory: 16
state: present
"""
RETURN = r"""
server_ids:
description: The list of server IDs that are changed.
returned: success
type: list
sample: ["UC1TEST-SVR01", "UC1TEST-SVR02"]
servers:
description: The list of server objects that are changed.
returned: success
type: list
sample:
[
{
"changeInfo":{
"createdBy":"service.wfad",
"createdDate":1438196820,
"modifiedBy":"service.wfad",
"modifiedDate":1438196820
},
"description":"test-server",
"details":{
"alertPolicies":[
],
"cpu":1,
"customFields":[
],
"diskCount":3,
"disks":[
{
"id":"0:0",
"partitionPaths":[
],
"sizeGB":1
},
{
"id":"0:1",
"partitionPaths":[
],
"sizeGB":2
},
{
"id":"0:2",
"partitionPaths":[
],
"sizeGB":14
}
],
"hostName":"",
"inMaintenanceMode":false,
"ipAddresses":[
{
"internal":"10.1.1.1"
}
],
"memoryGB":1,
"memoryMB":1024,
"partitions":[
],
"powerState":"started",
"snapshots":[
],
"storageGB":17
},
"groupId":"086ac1dfe0b6411989e8d1b77c4065f0",
"id":"test-server",
"ipaddress":"10.120.45.23",
"isTemplate":false,
"links":[
{
"href":"/v2/servers/wfad/test-server",
"id":"test-server",
"rel":"self",
"verbs":[
"GET",
"PATCH",
"DELETE"
]
},
{
"href":"/v2/groups/wfad/086ac1dfe0b6411989e8d1b77c4065f0",
"id":"086ac1dfe0b6411989e8d1b77c4065f0",
"rel":"group"
},
{
"href":"/v2/accounts/wfad",
"id":"wfad",
"rel":"account"
},
{
"href":"/v2/billing/wfad/serverPricing/test-server",
"rel":"billing"
},
{
"href":"/v2/servers/wfad/test-server/publicIPAddresses",
"rel":"publicIPAddresses",
"verbs":[
"POST"
]
},
{
"href":"/v2/servers/wfad/test-server/credentials",
"rel":"credentials"
},
{
"href":"/v2/servers/wfad/test-server/statistics",
"rel":"statistics"
},
{
"href":"/v2/servers/wfad/510ec21ae82d4dc89d28479753bf736a/upcomingScheduledActivities",
"rel":"upcomingScheduledActivities"
},
{
"href":"/v2/servers/wfad/510ec21ae82d4dc89d28479753bf736a/scheduledActivities",
"rel":"scheduledActivities",
"verbs":[
"GET",
"POST"
]
},
{
"href":"/v2/servers/wfad/test-server/capabilities",
"rel":"capabilities"
},
{
"href":"/v2/servers/wfad/test-server/alertPolicies",
"rel":"alertPolicyMappings",
"verbs":[
"POST"
]
},
{
"href":"/v2/servers/wfad/test-server/antiAffinityPolicy",
"rel":"antiAffinityPolicyMapping",
"verbs":[
"PUT",
"DELETE"
]
},
{
"href":"/v2/servers/wfad/test-server/cpuAutoscalePolicy",
"rel":"cpuAutoscalePolicyMapping",
"verbs":[
"PUT",
"DELETE"
]
}
],
"locationId":"UC1",
"name":"test-server",
"os":"ubuntu14_64Bit",
"osType":"Ubuntu 14 64-bit",
"status":"active",
"storageType":"standard",
"type":"standard"
}
]
"""
__version__ = '${version}'
import json
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
from clc import APIFailedResponse
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcModifyServer:
clc = clc_sdk
def __init__(self, module):
"""
Construct module
"""
self.clc = clc_sdk
self.module = module
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
self._set_clc_credentials_from_env()
p = self.module.params
cpu = p.get('cpu')
memory = p.get('memory')
state = p.get('state')
if state == 'absent' and (cpu or memory):
return self.module.fail_json(
msg='\'absent\' state is not supported for \'cpu\' and \'memory\' arguments')
server_ids = p['server_ids']
if not isinstance(server_ids, list):
return self.module.fail_json(
msg='server_ids needs to be a list of instances to modify: %s' %
server_ids)
(changed, server_dict_array, changed_server_ids) = self._modify_servers(
server_ids=server_ids)
self.module.exit_json(
changed=changed,
server_ids=changed_server_ids,
servers=server_dict_array)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
server_ids=dict(type='list', required=True, elements='str'),
state=dict(default='present', choices=['present', 'absent']),
cpu=dict(),
memory=dict(),
anti_affinity_policy_id=dict(),
anti_affinity_policy_name=dict(),
alert_policy_id=dict(),
alert_policy_name=dict(),
wait=dict(type='bool', default=True)
)
mutually_exclusive = [
['anti_affinity_policy_id', 'anti_affinity_policy_name'],
['alert_policy_id', 'alert_policy_name']
]
return {"argument_spec": argument_spec,
"mutually_exclusive": mutually_exclusive}
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _get_servers_from_clc(self, server_list, message):
"""
Internal function to fetch list of CLC server objects from a list of server ids
:param server_list: The list of server ids
:param message: the error message to throw in case of any error
:return the list of CLC server objects
"""
try:
return self.clc.v2.Servers(server_list).servers
except CLCException as ex:
return self.module.fail_json(msg=message + ': %s' % ex.message)
def _modify_servers(self, server_ids):
"""
Modify the configuration of the servers in the provided list
:param server_ids: list of servers to modify
:return: a list of dictionaries with server information about the servers that were modified
"""
p = self.module.params
state = p.get('state')
server_params = {
'cpu': p.get('cpu'),
'memory': p.get('memory'),
'anti_affinity_policy_id': p.get('anti_affinity_policy_id'),
'anti_affinity_policy_name': p.get('anti_affinity_policy_name'),
'alert_policy_id': p.get('alert_policy_id'),
'alert_policy_name': p.get('alert_policy_name'),
}
changed = False
server_changed = False
aa_changed = False
ap_changed = False
server_dict_array = []
result_server_ids = []
request_list = []
changed_servers = []
if not isinstance(server_ids, list) or len(server_ids) < 1:
return self.module.fail_json(
msg='server_ids should be a list of servers, aborting')
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
for server in servers:
if state == 'present':
server_changed, server_result = self._ensure_server_config(
server, server_params)
if server_result:
request_list.append(server_result)
aa_changed = self._ensure_aa_policy_present(
server,
server_params)
ap_changed = self._ensure_alert_policy_present(
server,
server_params)
elif state == 'absent':
aa_changed = self._ensure_aa_policy_absent(
server,
server_params)
ap_changed = self._ensure_alert_policy_absent(
server,
server_params)
if server_changed or aa_changed or ap_changed:
changed_servers.append(server)
changed = True
self._wait_for_requests(self.module, request_list)
self._refresh_servers(self.module, changed_servers)
for server in changed_servers:
server_dict_array.append(server.data)
result_server_ids.append(server.id)
return changed, server_dict_array, result_server_ids
def _ensure_server_config(
self, server, server_params):
"""
ensures the server is updated with the provided cpu and memory
:param server: the CLC server object
:param server_params: the dictionary of server parameters
:return: (changed, group) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
cpu = server_params.get('cpu')
memory = server_params.get('memory')
changed = False
result = None
if not cpu:
cpu = server.cpu
if not memory:
memory = server.memory
if memory != server.memory or cpu != server.cpu:
if not self.module.check_mode:
result = self._modify_clc_server(
self.clc,
self.module,
server.id,
cpu,
memory)
changed = True
return changed, result
@staticmethod
def _modify_clc_server(clc, module, server_id, cpu, memory):
"""
Modify the memory or CPU of a clc server.
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param server_id: id of the server to modify
:param cpu: the new cpu value
:param memory: the new memory value
:return: the result of CLC API call
"""
result = None
acct_alias = clc.v2.Account.GetAlias()
try:
# Update the server configuration
job_obj = clc.v2.API.Call('PATCH',
'servers/%s/%s' % (acct_alias,
server_id),
json.dumps([{"op": "set",
"member": "memory",
"value": memory},
{"op": "set",
"member": "cpu",
"value": cpu}]))
result = clc.v2.Requests(job_obj)
except APIFailedResponse as ex:
module.fail_json(
msg='Unable to update the server configuration for server : "{0}". {1}'.format(
server_id, str(ex.response_text)))
return result
@staticmethod
def _wait_for_requests(module, request_list):
"""
Block until server provisioning requests are completed.
:param module: the AnsibleModule object
:param request_list: a list of clc-sdk.Request instances
:return: none
"""
wait = module.params.get('wait')
if wait:
# Requests.WaitUntilComplete() returns the count of failed requests
failed_requests_count = sum(
[request.WaitUntilComplete() for request in request_list])
if failed_requests_count > 0:
module.fail_json(
msg='Unable to process modify server request')
@staticmethod
def _refresh_servers(module, servers):
"""
Loop through a list of servers and refresh them.
:param module: the AnsibleModule object
:param servers: list of clc-sdk.Server instances to refresh
:return: none
"""
for server in servers:
try:
server.Refresh()
except CLCException as ex:
module.fail_json(msg='Unable to refresh the server {0}. {1}'.format(
server.id, ex.message
))
def _ensure_aa_policy_present(
self, server, server_params):
"""
ensures the server is updated with the provided anti affinity policy
:param server: the CLC server object
:param server_params: the dictionary of server parameters
:return: (changed, group) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
acct_alias = self.clc.v2.Account.GetAlias()
aa_policy_id = server_params.get('anti_affinity_policy_id')
aa_policy_name = server_params.get('anti_affinity_policy_name')
if not aa_policy_id and aa_policy_name:
aa_policy_id = self._get_aa_policy_id_by_name(
self.clc,
self.module,
acct_alias,
aa_policy_name)
current_aa_policy_id = self._get_aa_policy_id_of_server(
self.clc,
self.module,
acct_alias,
server.id)
if aa_policy_id and aa_policy_id != current_aa_policy_id:
self._modify_aa_policy(
self.clc,
self.module,
acct_alias,
server.id,
aa_policy_id)
changed = True
return changed
def _ensure_aa_policy_absent(
self, server, server_params):
"""
ensures the provided anti affinity policy is removed from the server
:param server: the CLC server object
:param server_params: the dictionary of server parameters
:return: (changed, group) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
acct_alias = self.clc.v2.Account.GetAlias()
aa_policy_id = server_params.get('anti_affinity_policy_id')
aa_policy_name = server_params.get('anti_affinity_policy_name')
if not aa_policy_id and aa_policy_name:
aa_policy_id = self._get_aa_policy_id_by_name(
self.clc,
self.module,
acct_alias,
aa_policy_name)
current_aa_policy_id = self._get_aa_policy_id_of_server(
self.clc,
self.module,
acct_alias,
server.id)
if aa_policy_id and aa_policy_id == current_aa_policy_id:
self._delete_aa_policy(
self.clc,
self.module,
acct_alias,
server.id)
changed = True
return changed
@staticmethod
def _modify_aa_policy(clc, module, acct_alias, server_id, aa_policy_id):
"""
modifies the anti affinity policy of the CLC server
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param acct_alias: the CLC account alias
:param server_id: the CLC server id
:param aa_policy_id: the anti affinity policy id
:return: result: The result from the CLC API call
"""
result = None
if not module.check_mode:
try:
result = clc.v2.API.Call('PUT',
'servers/%s/%s/antiAffinityPolicy' % (
acct_alias,
server_id),
json.dumps({"id": aa_policy_id}))
except APIFailedResponse as ex:
module.fail_json(
msg='Unable to modify anti affinity policy to server : "{0}". {1}'.format(
server_id, str(ex.response_text)))
return result
@staticmethod
def _delete_aa_policy(clc, module, acct_alias, server_id):
"""
Delete the anti affinity policy of the CLC server
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param acct_alias: the CLC account alias
:param server_id: the CLC server id
:return: result: The result from the CLC API call
"""
result = None
if not module.check_mode:
try:
result = clc.v2.API.Call('DELETE',
'servers/%s/%s/antiAffinityPolicy' % (
acct_alias,
server_id),
json.dumps({}))
except APIFailedResponse as ex:
module.fail_json(
msg='Unable to delete anti affinity policy to server : "{0}". {1}'.format(
server_id, str(ex.response_text)))
return result
@staticmethod
def _get_aa_policy_id_by_name(clc, module, alias, aa_policy_name):
"""
retrieves the anti affinity policy id of the server based on the name of the policy
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param alias: the CLC account alias
:param aa_policy_name: the anti affinity policy name
:return: aa_policy_id: The anti affinity policy id
"""
aa_policy_id = None
try:
aa_policies = clc.v2.API.Call(method='GET',
url='antiAffinityPolicies/%s' % alias)
except APIFailedResponse as ex:
return module.fail_json(
msg='Unable to fetch anti affinity policies from account alias : "{0}". {1}'.format(
alias, str(ex.response_text)))
for aa_policy in aa_policies.get('items'):
if aa_policy.get('name') == aa_policy_name:
if not aa_policy_id:
aa_policy_id = aa_policy.get('id')
else:
return module.fail_json(
msg='multiple anti affinity policies were found with policy name : %s' % aa_policy_name)
if not aa_policy_id:
module.fail_json(
msg='No anti affinity policy was found with policy name : %s' % aa_policy_name)
return aa_policy_id
@staticmethod
def _get_aa_policy_id_of_server(clc, module, alias, server_id):
"""
retrieves the anti affinity policy id of the server based on the CLC server id
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param alias: the CLC account alias
:param server_id: the CLC server id
:return: aa_policy_id: The anti affinity policy id
"""
aa_policy_id = None
try:
result = clc.v2.API.Call(
method='GET', url='servers/%s/%s/antiAffinityPolicy' %
(alias, server_id))
aa_policy_id = result.get('id')
except APIFailedResponse as ex:
if ex.response_status_code != 404:
module.fail_json(msg='Unable to fetch anti affinity policy for server "{0}". {1}'.format(
server_id, str(ex.response_text)))
return aa_policy_id
def _ensure_alert_policy_present(
self, server, server_params):
"""
ensures the server is updated with the provided alert policy
:param server: the CLC server object
:param server_params: the dictionary of server parameters
:return: (changed, group) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
acct_alias = self.clc.v2.Account.GetAlias()
alert_policy_id = server_params.get('alert_policy_id')
alert_policy_name = server_params.get('alert_policy_name')
if not alert_policy_id and alert_policy_name:
alert_policy_id = self._get_alert_policy_id_by_name(
self.clc,
self.module,
acct_alias,
alert_policy_name)
if alert_policy_id and not self._alert_policy_exists(
server, alert_policy_id):
self._add_alert_policy_to_server(
self.clc,
self.module,
acct_alias,
server.id,
alert_policy_id)
changed = True
return changed
def _ensure_alert_policy_absent(
self, server, server_params):
"""
ensures the alert policy is removed from the server
:param server: the CLC server object
:param server_params: the dictionary of server parameters
:return: (changed, group) -
changed: Boolean whether a change was made
result: The result from the CLC API call
"""
changed = False
acct_alias = self.clc.v2.Account.GetAlias()
alert_policy_id = server_params.get('alert_policy_id')
alert_policy_name = server_params.get('alert_policy_name')
if not alert_policy_id and alert_policy_name:
alert_policy_id = self._get_alert_policy_id_by_name(
self.clc,
self.module,
acct_alias,
alert_policy_name)
if alert_policy_id and self._alert_policy_exists(
server, alert_policy_id):
self._remove_alert_policy_to_server(
self.clc,
self.module,
acct_alias,
server.id,
alert_policy_id)
changed = True
return changed
@staticmethod
def _add_alert_policy_to_server(
clc, module, acct_alias, server_id, alert_policy_id):
"""
add the alert policy to CLC server
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param acct_alias: the CLC account alias
:param server_id: the CLC server id
:param alert_policy_id: the alert policy id
:return: result: The result from the CLC API call
"""
result = None
if not module.check_mode:
try:
result = clc.v2.API.Call('POST',
'servers/%s/%s/alertPolicies' % (
acct_alias,
server_id),
json.dumps({"id": alert_policy_id}))
except APIFailedResponse as ex:
module.fail_json(msg='Unable to set alert policy to the server : "{0}". {1}'.format(
server_id, str(ex.response_text)))
return result
@staticmethod
def _remove_alert_policy_to_server(
clc, module, acct_alias, server_id, alert_policy_id):
"""
remove the alert policy from the CLC server
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param acct_alias: the CLC account alias
:param server_id: the CLC server id
:param alert_policy_id: the alert policy id
:return: result: The result from the CLC API call
"""
result = None
if not module.check_mode:
try:
result = clc.v2.API.Call('DELETE',
'servers/%s/%s/alertPolicies/%s'
% (acct_alias, server_id, alert_policy_id))
except APIFailedResponse as ex:
module.fail_json(msg='Unable to remove alert policy from the server : "{0}". {1}'.format(
server_id, str(ex.response_text)))
return result
@staticmethod
def _get_alert_policy_id_by_name(clc, module, alias, alert_policy_name):
"""
retrieves the alert policy id of the server based on the name of the policy
:param clc: the clc-sdk instance to use
:param module: the AnsibleModule object
:param alias: the CLC account alias
:param alert_policy_name: the alert policy name
:return: alert_policy_id: The alert policy id
"""
alert_policy_id = None
try:
alert_policies = clc.v2.API.Call(method='GET',
url='alertPolicies/%s' % alias)
except APIFailedResponse as ex:
return module.fail_json(msg='Unable to fetch alert policies for account : "{0}". {1}'.format(
alias, str(ex.response_text)))
for alert_policy in alert_policies.get('items'):
if alert_policy.get('name') == alert_policy_name:
if not alert_policy_id:
alert_policy_id = alert_policy.get('id')
else:
return module.fail_json(
msg='multiple alert policies were found with policy name : %s' % alert_policy_name)
return alert_policy_id
@staticmethod
def _alert_policy_exists(server, alert_policy_id):
"""
Checks if the alert policy exists for the server
:param server: the clc server object
:param alert_policy_id: the alert policy
:return: True if the given alert policy id is associated with the server, False otherwise
"""
result = False
alert_policies = server.alertPolicies
if alert_policies:
for alert_policy in alert_policies:
if alert_policy.get('id') == alert_policy_id:
result = True
return result
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
argument_dict = ClcModifyServer._define_module_argument_spec()
module = AnsibleModule(supports_check_mode=True, **argument_dict)
clc_modify_server = ClcModifyServer(module)
clc_modify_server.process_request()
if __name__ == '__main__':
main()
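As a clarifying note on _modify_clc_server() above: the body it PATCHes to servers/{alias}/{server_id} is a list of set operations. With the hypothetical values from the last EXAMPLES task (memory 16, cpu 8), the payload is equivalent to:

- op: set
  member: memory
  value: 16
- op: set
  member: cpu
  value: 8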


@@ -1,359 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_publicip
short_description: Add and Delete public IPs on servers in CenturyLink Cloud
description:
- An Ansible module to add or delete public IP addresses on an existing server or servers in CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
protocol:
description:
- The protocol that the public IP will listen for.
type: str
default: TCP
choices: ['TCP', 'UDP', 'ICMP']
ports:
description:
- A list of ports to expose. This is required when O(state=present).
type: list
elements: int
server_ids:
description:
- A list of servers to create public IPs on.
type: list
required: true
elements: str
state:
description:
- Determine whether to create or delete public IPs. If V(present), the module will not create a second public IP if one already
exists.
type: str
default: present
choices: ['present', 'absent']
wait:
description:
- Whether to wait for the tasks to finish before returning.
type: bool
default: true
"""
EXAMPLES = r"""
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Add Public IP to Server
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create Public IP For Servers
community.general.clc_publicip:
protocol: TCP
ports:
- 80
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
state: present
register: clc
- name: Debug
ansible.builtin.debug:
var: clc
- name: Delete Public IP from Server
hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Create Public IP For Servers
community.general.clc_publicip:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
state: absent
register: clc
- name: Debug
ansible.builtin.debug:
var: clc
"""
RETURN = r"""
server_ids:
description: The list of server IDs that are changed.
returned: success
type: list
sample: ["UC1TEST-SVR01", "UC1TEST-SVR02"]
"""
__version__ = '${version}'
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcPublicIp(object):
clc = clc_sdk
module = None
def __init__(self, module):
"""
Construct module
"""
self.module = module
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
self._set_clc_credentials_from_env()
params = self.module.params
server_ids = params['server_ids']
ports = params['ports']
protocol = params['protocol']
state = params['state']
if state == 'present':
changed, changed_server_ids, requests = self.ensure_public_ip_present(
server_ids=server_ids, protocol=protocol, ports=ports)
elif state == 'absent':
changed, changed_server_ids, requests = self.ensure_public_ip_absent(
server_ids=server_ids)
else:
return self.module.fail_json(msg="Unknown State: " + state)
self._wait_for_requests_to_complete(requests)
return self.module.exit_json(changed=changed,
server_ids=changed_server_ids)
@staticmethod
def _define_module_argument_spec():
"""
Define the argument spec for the ansible module
:return: argument spec dictionary
"""
argument_spec = dict(
server_ids=dict(type='list', required=True, elements='str'),
protocol=dict(default='TCP', choices=['TCP', 'UDP', 'ICMP']),
ports=dict(type='list', elements='int'),
wait=dict(type='bool', default=True),
state=dict(default='present', choices=['present', 'absent']),
)
return argument_spec
def ensure_public_ip_present(self, server_ids, protocol, ports):
"""
Ensures the given server ids have a public IP available
:param server_ids: the list of server ids
:param protocol: the ip protocol
:param ports: the list of ports to expose
:return: (changed, changed_server_ids, results)
changed: A flag indicating if there is any change
changed_server_ids : the list of server ids that are changed
results: The result list from clc public ip call
"""
changed = False
results = []
changed_server_ids = []
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
servers_to_change = [
server for server in servers if len(
server.PublicIPs().public_ips) == 0]
ports_to_expose = [{'protocol': protocol, 'port': port}
for port in ports]
for server in servers_to_change:
if not self.module.check_mode:
result = self._add_publicip_to_server(server, ports_to_expose)
results.append(result)
changed_server_ids.append(server.id)
changed = True
return changed, changed_server_ids, results
def _add_publicip_to_server(self, server, ports_to_expose):
result = None
try:
result = server.PublicIPs().Add(ports_to_expose)
except CLCException as ex:
self.module.fail_json(msg='Failed to add public ip to the server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def ensure_public_ip_absent(self, server_ids):
"""
Ensures any public IPs are removed from the given server ids
:param server_ids: the list of server ids
:return: (changed, changed_server_ids, results)
changed: A flag indicating if there is any change
changed_server_ids : the list of server ids that are changed
results: The result list from clc public ip call
"""
changed = False
results = []
changed_server_ids = []
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
servers_to_change = [
server for server in servers if len(
server.PublicIPs().public_ips) > 0]
for server in servers_to_change:
if not self.module.check_mode:
result = self._remove_publicip_from_server(server)
results.append(result)
changed_server_ids.append(server.id)
changed = True
return changed, changed_server_ids, results
def _remove_publicip_from_server(self, server):
result = None
try:
for ip_address in server.PublicIPs().public_ips:
result = ip_address.Delete()
except CLCException as ex:
self.module.fail_json(msg='Failed to remove public ip from the server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def _wait_for_requests_to_complete(self, requests_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param requests_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in requests_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process public ip request')
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
def _get_servers_from_clc(self, server_ids, message):
"""
Gets the list of servers from the CLC API
"""
try:
return self.clc.v2.Servers(server_ids).servers
except CLCException as exception:
self.module.fail_json(msg=message + ': %s' % exception)
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
The main function. Instantiates the module and calls process_request.
:return: none
"""
module = AnsibleModule(
argument_spec=ClcPublicIp._define_module_argument_spec(),
supports_check_mode=True
)
clc_public_ip = ClcPublicIp(module)
clc_public_ip.process_request()
if __name__ == '__main__':
main()
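A small note on ensure_public_ip_present() above: ports_to_expose is a list of protocol/port pairs built from the module arguments. For the EXAMPLES task shown earlier (TCP on port 80) it expands to:

- protocol: TCP
  port: 80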

File diff suppressed because it is too large


@@ -1,409 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015 CenturyLink
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: clc_server_snapshot
short_description: Create, Delete and Restore server snapshots in CenturyLink Cloud
description:
- An Ansible module to Create, Delete and Restore server snapshots in CenturyLink Cloud.
deprecated:
removed_in: 11.0.0
why: >
Lumen Public Cloud (formerly known as CenturyLink Cloud) has gone End-of-Life in September 2023.
See more at U(https://www.ctl.io/knowledge-base/release-notes/2023/lumen-public-cloud-platform-end-of-life-notice/?).
alternative: There is none.
extends_documentation_fragment:
- community.general.attributes
- community.general.clc
author:
- "CLC Runner (@clc-runner)"
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
server_ids:
description:
- The list of CLC server IDs.
type: list
required: true
elements: str
expiration_days:
description:
- The number of days to keep the server snapshot before it expires.
type: int
default: 7
required: false
state:
description:
- The state to ensure that the provided resources are in.
type: str
default: 'present'
required: false
choices: ['present', 'absent', 'restore']
wait:
description:
- Whether to wait for the provisioning tasks to finish before returning.
default: 'True'
required: false
type: str
"""
EXAMPLES = r"""
# Note - You must set the CLC_V2_API_USERNAME And CLC_V2_API_PASSWD Environment variables before running these examples
- name: Create server snapshot
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
expiration_days: 10
wait: true
state: present
- name: Restore server snapshot
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
wait: true
state: restore
- name: Delete server snapshot
community.general.clc_server_snapshot:
server_ids:
- UC1TEST-SVR01
- UC1TEST-SVR02
wait: true
state: absent
"""
RETURN = r"""
server_ids:
description: The list of server IDs that are changed.
returned: success
type: list
sample: ["UC1TEST-SVR01", "UC1TEST-SVR02"]
"""
__version__ = '${version}'
import os
import traceback
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
REQUESTS_IMP_ERR = None
try:
import requests
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
REQUESTS_FOUND = False
else:
REQUESTS_FOUND = True
#
# Requires the clc-python-sdk.
# sudo pip install clc-sdk
#
CLC_IMP_ERR = None
try:
import clc as clc_sdk
from clc import CLCException
except ImportError:
CLC_IMP_ERR = traceback.format_exc()
CLC_FOUND = False
clc_sdk = None
else:
CLC_FOUND = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class ClcSnapshot:
clc = clc_sdk
module = None
def __init__(self, module):
"""
Construct module
"""
self.module = module
if not CLC_FOUND:
self.module.fail_json(msg=missing_required_lib('clc-sdk'), exception=CLC_IMP_ERR)
if not REQUESTS_FOUND:
self.module.fail_json(msg=missing_required_lib('requests'), exception=REQUESTS_IMP_ERR)
if requests.__version__ and LooseVersion(requests.__version__) < LooseVersion('2.5.0'):
self.module.fail_json(
msg='requests library version should be >= 2.5.0')
self._set_user_agent(self.clc)
def process_request(self):
"""
Process the request - Main Code Path
:return: Returns with either an exit_json or fail_json
"""
p = self.module.params
server_ids = p['server_ids']
expiration_days = p['expiration_days']
state = p['state']
request_list = []
changed = False
changed_servers = []
self._set_clc_credentials_from_env()
if state == 'present':
changed, request_list, changed_servers = self.ensure_server_snapshot_present(
server_ids=server_ids,
expiration_days=expiration_days)
elif state == 'absent':
changed, request_list, changed_servers = self.ensure_server_snapshot_absent(
server_ids=server_ids)
elif state == 'restore':
changed, request_list, changed_servers = self.ensure_server_snapshot_restore(
server_ids=server_ids)
self._wait_for_requests_to_complete(request_list)
return self.module.exit_json(
changed=changed,
server_ids=changed_servers)
def ensure_server_snapshot_present(self, server_ids, expiration_days):
"""
Ensures the given set of server_ids have the snapshots created
:param server_ids: The list of server_ids to create the snapshot
:param expiration_days: The number of days to keep the snapshot
:return: (changed, request_list, changed_servers)
changed: A flag indicating whether any change was made
request_list: the list of clc request objects from CLC API call
changed_servers: The list of servers ids that are modified
"""
request_list = []
changed = False
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
servers_to_change = [
server for server in servers if len(
server.GetSnapshots()) == 0]
for server in servers_to_change:
changed = True
if not self.module.check_mode:
request = self._create_server_snapshot(server, expiration_days)
request_list.append(request)
changed_servers = [
server.id for server in servers_to_change if server.id]
return changed, request_list, changed_servers
def _create_server_snapshot(self, server, expiration_days):
"""
Create the snapshot for the CLC server
:param server: the CLC server object
:param expiration_days: The number of days to keep the snapshot
:return: the create request object from CLC API Call
"""
result = None
try:
result = server.CreateSnapshot(
delete_existing=True,
expiration_days=expiration_days)
except CLCException as ex:
self.module.fail_json(msg='Failed to create snapshot for server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def ensure_server_snapshot_absent(self, server_ids):
"""
Ensures the given set of server_ids have the snapshots removed
:param server_ids: The list of server_ids to delete the snapshot
:return: (changed, request_list, changed_servers)
changed: A flag indicating whether any change was made
request_list: the list of clc request objects from CLC API call
changed_servers: The list of servers ids that are modified
"""
request_list = []
changed = False
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
servers_to_change = [
server for server in servers if len(
server.GetSnapshots()) > 0]
for server in servers_to_change:
changed = True
if not self.module.check_mode:
request = self._delete_server_snapshot(server)
request_list.append(request)
changed_servers = [
server.id for server in servers_to_change if server.id]
return changed, request_list, changed_servers
def _delete_server_snapshot(self, server):
"""
Delete snapshot for the CLC server
:param server: the CLC server object
:return: the delete snapshot request object from CLC API
"""
result = None
try:
result = server.DeleteSnapshot()
except CLCException as ex:
self.module.fail_json(msg='Failed to delete snapshot for server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def ensure_server_snapshot_restore(self, server_ids):
"""
Ensures the given set of server_ids have the snapshots restored
:param server_ids: The list of server_ids to restore the snapshot
:return: (changed, request_list, changed_servers)
changed: A flag indicating whether any change was made
request_list: the list of clc request objects from CLC API call
changed_servers: The list of servers ids that are modified
"""
request_list = []
changed = False
servers = self._get_servers_from_clc(
server_ids,
'Failed to obtain server list from the CLC API')
servers_to_change = [
server for server in servers if len(
server.GetSnapshots()) > 0]
for server in servers_to_change:
changed = True
if not self.module.check_mode:
request = self._restore_server_snapshot(server)
request_list.append(request)
changed_servers = [
server.id for server in servers_to_change if server.id]
return changed, request_list, changed_servers
def _restore_server_snapshot(self, server):
"""
Restore snapshot for the CLC server
:param server: the CLC server object
:return: the restore snapshot request object from CLC API
"""
result = None
try:
result = server.RestoreSnapshot()
except CLCException as ex:
self.module.fail_json(msg='Failed to restore snapshot for server : {0}. {1}'.format(
server.id, ex.response_text
))
return result
def _wait_for_requests_to_complete(self, requests_lst):
"""
Waits until the CLC requests are complete if the wait argument is True
:param requests_lst: The list of CLC request objects
:return: none
"""
if not self.module.params['wait']:
return
for request in requests_lst:
request.WaitUntilComplete()
for request_details in request.requests:
if request_details.Status() != 'succeeded':
self.module.fail_json(
msg='Unable to process server snapshot request')
@staticmethod
def define_argument_spec():
"""
This function defines the dictionary object required for
this module
:return: the argument spec dictionary object
"""
argument_spec = dict(
server_ids=dict(type='list', required=True, elements='str'),
expiration_days=dict(default=7, type='int'),
wait=dict(default=True),
state=dict(
default='present',
choices=[
'present',
'absent',
'restore']),
)
return argument_spec
def _get_servers_from_clc(self, server_list, message):
"""
Internal function to fetch list of CLC server objects from a list of server ids
:param server_list: The list of server ids
:param message: The error message to throw in case of any error
:return the list of CLC server objects
"""
try:
return self.clc.v2.Servers(server_list).servers
except CLCException as ex:
return self.module.fail_json(msg=message + ': %s' % ex)
def _set_clc_credentials_from_env(self):
"""
Set the CLC Credentials on the sdk by reading environment variables
:return: none
"""
env = os.environ
v2_api_token = env.get('CLC_V2_API_TOKEN', False)
v2_api_username = env.get('CLC_V2_API_USERNAME', False)
v2_api_passwd = env.get('CLC_V2_API_PASSWD', False)
clc_alias = env.get('CLC_ACCT_ALIAS', False)
api_url = env.get('CLC_V2_API_URL', False)
if api_url:
self.clc.defaults.ENDPOINT_URL_V2 = api_url
if v2_api_token and clc_alias:
self.clc._LOGIN_TOKEN_V2 = v2_api_token
self.clc._V2_ENABLED = True
self.clc.ALIAS = clc_alias
elif v2_api_username and v2_api_passwd:
self.clc.v2.SetCredentials(
api_username=v2_api_username,
api_passwd=v2_api_passwd)
else:
return self.module.fail_json(
msg="You must set the CLC_V2_API_USERNAME and CLC_V2_API_PASSWD "
"environment variables")
@staticmethod
def _set_user_agent(clc):
if hasattr(clc, 'SetRequestsSession'):
agent_string = "ClcAnsibleModule/" + __version__
ses = requests.Session()
ses.headers.update({"Api-Client": agent_string})
ses.headers['User-Agent'] += " " + agent_string
clc.SetRequestsSession(ses)
def main():
"""
Main function
:return: None
"""
module = AnsibleModule(
argument_spec=ClcSnapshot.define_argument_spec(),
supports_check_mode=True
)
clc_snapshot = ClcSnapshot(module)
clc_snapshot.process_request()
if __name__ == '__main__':
main()
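Since _set_clc_credentials_from_env() only reads credentials from the process environment, one way to supply them was through the task-level environment keyword. A minimal sketch, with placeholder variable names:

- name: Create server snapshots with CLC credentials passed via the task environment
  community.general.clc_server_snapshot:
    server_ids:
      - UC1TEST-SVR01
    expiration_days: 10
    state: present
  environment:
    CLC_V2_API_USERNAME: "{{ clc_username }}"
    CLC_V2_API_PASSWD: "{{ clc_password }}"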


@@ -204,7 +204,6 @@ class CPANMinus(ModuleHelper):
pkg_spec=cmd_runner_fmt.as_list(),
cpanm_version=cmd_runner_fmt.as_fixed("--version"),
)
use_old_vardict = False
def __init_module__(self):
v = self.vars


@@ -132,7 +132,6 @@ def decompress(b_src, b_dest, handler):
class Decompress(ModuleHelper):
destination_filename_template = "%s_decompressed"
use_old_vardict = False
output_params = 'dest'
module = dict(


@@ -122,12 +122,6 @@ options:
type: str
required: false
aliases: [test_runner]
ack_venv_creation_deprecation:
description:
- This option no longer has any effect since community.general 9.0.0.
- It will be removed from community.general 11.0.0.
type: bool
version_added: 5.8.0
notes:
- 'B(ATTENTION): Support for Django releases older than 4.1 has been removed in community.general version 9.0.0. While the
@@ -291,7 +285,6 @@ def main():
skip=dict(type='bool'),
merge=dict(type='bool'),
link=dict(type='bool'),
ack_venv_creation_deprecation=dict(type='bool', removed_in_version='11.0.0', removed_from_collection='community.general'),
),
)
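For context, playbooks that still pass the removed option only need to drop it; everything else is unchanged. A minimal sketch of an unaffected task, assuming a hypothetical project path:

- name: Run database migrations (ack_venv_creation_deprecation is no longer accepted)
  community.general.django_manage:
    command: migrate
    project_path: /path/to/project  # hypothetical path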

View file

@ -128,7 +128,6 @@ class GConftool(StateModuleHelper):
],
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.runner = gconftool2_runner(self.module, check_rc=True)

View file

@ -67,7 +67,6 @@ class GConftoolInfo(ModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.runner = gconftool2_runner(self.module, check_rc=True)

View file

@ -94,7 +94,6 @@ class GioMime(ModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.runner = gio_mime_runner(self.module, check_rc=True)

View file

@ -31,17 +31,11 @@ attributes:
diff_mode:
support: none
options:
list_all:
description:
- List all settings (optionally limited to a given O(scope)).
- This option is B(deprecated) and will be removed from community.general 11.0.0. Please use M(community.general.git_config_info)
instead.
type: bool
default: false
name:
description:
- The name of the setting. If no value is supplied, the value will be read from the config if it has been set.
- The name of the setting.
type: str
required: true
repo:
description:
- Path to a git repository for reading and writing values from a specific repo.
@ -57,7 +51,7 @@ options:
- This is required when setting config values.
- If this is set to V(local), you must also specify the O(repo) parameter.
- If this is set to V(file), you must also specify the O(file) parameter.
- It defaults to system only when not using O(list_all=true).
- It defaults to system.
choices: ["file", "local", "global", "system"]
type: str
state:
@ -70,7 +64,7 @@ options:
value:
description:
- When specifying the name of a single setting, supply a value to set that setting to the given value.
- From community.general 11.0.0 on, O(value) will be required if O(state=present). To read values, use the M(community.general.git_config_info)
- From community.general 11.0.0 on, O(value) is required if O(state=present). To read values, use the M(community.general.git_config_info)
module instead.
type: str
add_mode:
@ -144,21 +138,6 @@ EXAMPLES = r"""
"""
RETURN = r"""
config_value:
description: When O(list_all=false) and value is not set, a string containing the value of the setting in name.
returned: success
type: str
sample: "vim"
config_values:
description: When O(list_all=true), a dict containing key/value pairs of multiple configuration settings.
returned: success
type: dict
sample:
core.editor: "vim"
color.ui: "auto"
alias.diffc: "diff --cached"
alias.remotev: "remote -v"
"""
from ansible.module_utils.basic import AnsibleModule
@ -167,8 +146,7 @@ from ansible.module_utils.basic import AnsibleModule
def main():
module = AnsibleModule(
argument_spec=dict(
list_all=dict(required=False, type='bool', default=False, removed_in_version='11.0.0', removed_from_collection='community.general'),
name=dict(type='str'),
name=dict(type='str', required=True),
repo=dict(type='path'),
file=dict(type='path'),
add_mode=dict(required=False, type='str', default='replace-all', choices=['add', 'replace-all']),
@ -176,12 +154,11 @@ def main():
state=dict(required=False, type='str', default='present', choices=['present', 'absent']),
value=dict(required=False),
),
mutually_exclusive=[['list_all', 'name'], ['list_all', 'value'], ['list_all', 'state']],
required_if=[
('scope', 'local', ['repo']),
('scope', 'file', ['file'])
('scope', 'file', ['file']),
('state', 'present', ['value']),
],
required_one_of=[['list_all', 'name']],
supports_check_mode=True,
)
git_path = module.get_bin_path('git', True)
@ -196,13 +173,8 @@ def main():
new_value = params['value'] or ''
add_mode = params['add_mode']
if not unset and not new_value and not params['list_all']:
module.deprecate(
'If state=present, a value must be specified from community.general 11.0.0 on.'
' To read a config value, use the community.general.git_config_info module instead.',
version='11.0.0',
collection_name='community.general',
)
if not unset and not new_value:
module.fail_json(msg="If state=present, a value must be specified. Use the community.general.git_config_info module to read a config value.")
scope = determine_scope(params)
cwd = determine_cwd(scope, params)
@ -217,33 +189,18 @@ def main():
list_args = list(base_args)
if params['list_all']:
list_args.append('-l')
if name:
list_args.append("--get-all")
list_args.append(name)
list_args.append("--get-all")
list_args.append(name)
(rc, out, err) = module.run_command(list_args, cwd=cwd, expand_user_and_vars=False)
if params['list_all'] and scope and rc == 128 and 'unable to read config file' in err:
# This just means nothing has been set at the given scope
module.exit_json(changed=False, msg='', config_values={})
elif rc >= 2:
if rc >= 2:
# If the return code is 1, it just means the option hasn't been set yet, which is fine.
module.fail_json(rc=rc, msg=err, cmd=' '.join(list_args))
old_values = out.rstrip().splitlines()
if params['list_all']:
config_values = {}
for value in old_values:
k, v = value.split('=', 1)
config_values[k] = v
module.exit_json(changed=False, msg='', config_values=config_values)
elif not new_value and not unset:
module.exit_json(changed=False, msg='', config_value=old_values[0] if old_values else '')
elif unset and not out:
if unset and not out:
module.exit_json(changed=False, msg='no setting to unset')
elif new_value in old_values and (len(old_values) == 1 or add_mode == "add") and not unset:
module.exit_json(changed=False, msg="")
@ -286,30 +243,22 @@ def main():
def determine_scope(params):
if params['scope']:
return params['scope']
elif params['list_all']:
return ""
else:
return 'system'
return 'system'
def build_diff_value(value):
if not value:
return "\n"
elif len(value) == 1:
if len(value) == 1:
return value[0] + "\n"
else:
return value
return value
def determine_cwd(scope, params):
if scope == 'local':
return params['repo']
elif params['list_all'] and params['repo']:
# Include local settings from a specific repo when listing all available settings
return params['repo']
else:
# Run from root directory to avoid accidentally picking up any local config settings
return "/"
# Run from root directory to avoid accidentally picking up any local config settings
return "/"
if __name__ == '__main__':

View file

@ -1,222 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: hipchat
short_description: Send a message to Hipchat
description:
- Send a message to a Hipchat room, with options to control the formatting.
extends_documentation_fragment:
- community.general.attributes
deprecated:
removed_in: 11.0.0
why: The hipchat service has been discontinued and the self-hosted variant has been End of Life since 2020.
alternative: There is none.
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
token:
type: str
description:
- API token.
required: true
room:
type: str
description:
- ID or name of the room.
required: true
msg_from:
type: str
description:
- Name the message will appear to be sent from. Max length is 15 characters - above this it will be truncated.
default: Ansible
aliases: [from]
msg:
type: str
description:
- The message body.
required: true
color:
type: str
description:
- Background color for the message.
default: yellow
choices: ["yellow", "red", "green", "purple", "gray", "random"]
msg_format:
type: str
description:
- Message format.
default: text
choices: ["text", "html"]
notify:
description:
- If true, a notification will be triggered for users in the room.
type: bool
default: true
validate_certs:
description:
- If V(false), SSL certificates will not be validated. This should only be used on personally controlled sites using
self-signed certificates.
type: bool
default: true
api:
type: str
description:
- API URL if using a self-hosted hipchat server. For Hipchat API version 2 use the default URI with C(/v2) instead of
C(/v1).
default: 'https://api.hipchat.com/v1'
author:
- Shirou Wakayama (@shirou)
- Paul Bourdel (@pb8226)
"""
EXAMPLES = r"""
- name: Send a message to a Hipchat room
community.general.hipchat:
room: notif
msg: Ansible task finished
- name: Send a message to a Hipchat room using Hipchat API version 2
community.general.hipchat:
api: https://api.hipchat.com/v2/
token: OAUTH2_TOKEN
room: notify
msg: Ansible task finished
"""
# ===========================================
# HipChat module specific support methods.
#
import json
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.six.moves.urllib.request import pathname2url
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.urls import fetch_url
DEFAULT_URI = "https://api.hipchat.com/v1"
MSG_URI_V1 = "/rooms/message"
NOTIFY_URI_V2 = "/room/{id_or_name}/notification"
def send_msg_v1(module, token, room, msg_from, msg, msg_format='text',
color='yellow', notify=False, api=MSG_URI_V1):
'''sending message to hipchat v1 server'''
params = {}
params['room_id'] = room
params['from'] = msg_from[:15] # max length is 15
params['message'] = msg
params['message_format'] = msg_format
params['color'] = color
params['api'] = api
params['notify'] = int(notify)
url = api + MSG_URI_V1 + "?auth_token=%s" % (token)
data = urlencode(params)
if module.check_mode:
# In check mode, exit before actually sending the message
module.exit_json(changed=False)
response, info = fetch_url(module, url, data=data)
if info['status'] == 200:
return response.read()
else:
module.fail_json(msg="failed to send message, return status=%s" % str(info['status']))
def send_msg_v2(module, token, room, msg_from, msg, msg_format='text',
color='yellow', notify=False, api=NOTIFY_URI_V2):
'''sending message to hipchat v2 server'''
headers = {'Authorization': 'Bearer %s' % token, 'Content-Type': 'application/json'}
body = dict()
body['message'] = msg
body['color'] = color
body['message_format'] = msg_format
body['notify'] = notify
POST_URL = api + NOTIFY_URI_V2
url = POST_URL.replace('{id_or_name}', pathname2url(room))
data = json.dumps(body)
if module.check_mode:
# In check mode, exit before actually sending the message
module.exit_json(changed=False)
response, info = fetch_url(module, url, data=data, headers=headers, method='POST')
# https://www.hipchat.com/docs/apiv2/method/send_room_notification shows
# 204 to be the expected result code.
if info['status'] in [200, 204]:
return response.read()
else:
module.fail_json(msg="failed to send message, return status=%s" % str(info['status']))
# ===========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict(
token=dict(required=True, no_log=True),
room=dict(required=True),
msg=dict(required=True),
msg_from=dict(default="Ansible", aliases=['from']),
color=dict(default="yellow", choices=["yellow", "red", "green",
"purple", "gray", "random"]),
msg_format=dict(default="text", choices=["text", "html"]),
notify=dict(default=True, type='bool'),
validate_certs=dict(default=True, type='bool'),
api=dict(default=DEFAULT_URI),
),
supports_check_mode=True
)
token = module.params["token"]
room = str(module.params["room"])
msg = module.params["msg"]
msg_from = module.params["msg_from"]
color = module.params["color"]
msg_format = module.params["msg_format"]
notify = module.params["notify"]
api = module.params["api"]
try:
if api.find('/v2') != -1:
send_msg_v2(module, token, room, msg_from, msg, msg_format, color, notify, api)
else:
send_msg_v1(module, token, room, msg_from, msg, msg_format, color, notify, api)
except Exception as e:
module.fail_json(msg="unable to send msg: %s" % to_native(e), exception=traceback.format_exc())
changed = True
module.exit_json(changed=changed, room=room, msg_from=msg_from, msg=msg)
if __name__ == '__main__':
main()

View file

@ -97,7 +97,6 @@ class HPOnCfg(ModuleHelper):
verbose=cmd_runner_fmt.as_bool("-v"),
minfw=cmd_runner_fmt.as_opt_val("-m"),
)
use_old_vardict = False
def __run__(self):
runner = CmdRunner(

View file

@ -560,7 +560,6 @@ class JIRA(StateModuleHelper):
),
supports_check_mode=False
)
use_old_vardict = False
state_param = 'operation'
def __init_module__(self):

View file

@ -65,7 +65,6 @@ class Blacklist(StateModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.pattern = re.compile(r'^blacklist\s+{0}$'.format(re.escape(self.vars.name)))

View file

@ -111,7 +111,6 @@ class LocaleGen(StateModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.MECHANISMS = dict(

View file

@ -141,7 +141,6 @@ class MkSysB(ModuleHelper):
backup_dmapi_fs=cmd_runner_fmt.as_bool("-A"),
combined_path=cmd_runner_fmt.as_func(cmd_runner_fmt.unpack_args(lambda p, n: ["%s/%s" % (p, n)])),
)
use_old_vardict = False
def __init_module__(self):
if not os.path.isdir(self.vars.storage_path):

View file

@ -134,7 +134,6 @@ class Opkg(StateModuleHelper):
executable=dict(type="path"),
),
)
use_old_vardict = False
def __init_module__(self):
self.vars.set("install_c", 0, output=False, change=True)

View file

@ -164,7 +164,6 @@ class PacemakerResource(StateModuleHelper):
required_if=[('state', 'present', ['resource_type', 'resource_option'])],
supports_check_mode=True,
)
use_old_vardict = False
default_state = "present"
def __init_module__(self):

View file

@ -264,7 +264,6 @@ class PipX(StateModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def _retrieve_installed(self):
output_process = make_process_dict(include_injected=True)

View file

@ -144,7 +144,6 @@ class PipXInfo(ModuleHelper):
argument_spec=argument_spec,
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
if self.vars.executable:

View file

@ -1,671 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: profitbricks
short_description: Create, destroy, start, stop, and reboot a ProfitBricks virtual machine
description:
- Create, destroy, update, start, stop, and reboot a ProfitBricks virtual machine. When the virtual machine is created it
can optionally wait for it to be 'running' before returning. This module has a dependency on profitbricks >= 1.0.0.
deprecated:
removed_in: 11.0.0
why: Module relies on library unsupported since 2021.
alternative: >
Profitbricks has rebranded as Ionos Cloud and they provide a collection named ionoscloudsdk.ionoscloud.
Whilst it is likely it will provide the features of this module, that has not been verified.
Please refer to that collection's documentation for more details.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
auto_increment:
description:
- Whether or not to increment a single number in the name for created virtual machines.
type: bool
default: true
name:
description:
- The name of the virtual machine.
type: str
image:
description:
- The system image ID for creating the virtual machine, for example V(a3eae284-a2fe-11e4-b187-5f1f641608c8).
type: str
image_password:
description:
- Password set for the administrative user.
type: str
ssh_keys:
description:
- Public SSH keys allowing access to the virtual machine.
type: list
elements: str
default: []
datacenter:
description:
- The datacenter to provision this virtual machine.
type: str
cores:
description:
- The number of CPU cores to allocate to the virtual machine.
default: 2
type: int
ram:
description:
- The amount of memory to allocate to the virtual machine.
default: 2048
type: int
cpu_family:
description:
- The CPU family type to allocate to the virtual machine.
type: str
default: AMD_OPTERON
choices: ["AMD_OPTERON", "INTEL_XEON"]
volume_size:
description:
- The size in GB of the boot volume.
type: int
default: 10
bus:
description:
- The bus type for the volume.
type: str
default: VIRTIO
choices: ["IDE", "VIRTIO"]
instance_ids:
description:
- List of instance IDs, currently only used when state='absent' to remove instances.
type: list
elements: str
default: []
count:
description:
- The number of virtual machines to create.
type: int
default: 1
location:
description:
- The datacenter location. Use only if you want to create the Datacenter or else this value is ignored.
type: str
default: us/las
choices: ["us/las", "de/fra", "de/fkb"]
assign_public_ip:
description:
- This will assign the machine to the public LAN. If no LAN exists with public Internet access it is created.
type: bool
default: false
lan:
description:
- The ID of the LAN you wish to add the servers to.
type: int
default: 1
subscription_user:
description:
- The ProfitBricks username. Overrides the E(PB_SUBSCRIPTION_ID) environment variable.
type: str
subscription_password:
description:
- The ProfitBricks password. Overrides the E(PB_PASSWORD) environment variable.
type: str
wait:
description:
- Wait for the instance to be in state 'running' before returning.
type: bool
default: true
wait_timeout:
description:
- How long before wait gives up, in seconds.
type: int
default: 600
remove_boot_volume:
description:
- Remove the bootVolume of the virtual machine you are destroying.
type: bool
default: true
state:
description:
- Create or terminate instances.
- 'The choices available are: V(running), V(stopped), V(absent), V(present).'
type: str
default: 'present'
disk_type:
description:
- The type of disk to be allocated.
type: str
choices: [SSD, HDD]
default: HDD
requirements:
- "profitbricks"
author: Matt Baldwin (@baldwinSPC) <baldwin@stackpointcloud.com>
"""
EXAMPLES = r"""
# Note: These examples do not set authentication details; see the subscription_user and subscription_password options for details.
# Provisioning example
- name: Create three servers and enumerate their names
community.general.profitbricks:
datacenter: Tardis One
name: web%02d.stackpointcloud.com
cores: 4
ram: 2048
volume_size: 50
cpu_family: INTEL_XEON
image: a3eae284-a2fe-11e4-b187-5f1f641608c8
location: us/las
count: 3
assign_public_ip: true
- name: Remove virtual machines
community.general.profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: absent
- name: Start virtual machines
community.general.profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: running
- name: Stop virtual machines
community.general.profitbricks:
datacenter: Tardis One
instance_ids:
- 'web001.stackpointcloud.com'
- 'web002.stackpointcloud.com'
- 'web003.stackpointcloud.com'
wait_timeout: 500
state: stopped
"""
import re
import uuid
import time
import traceback
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, Volume, Server, Datacenter, NIC, LAN
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xrange
from ansible.module_utils.common.text.converters import to_native
LOCATIONS = ['us/las',
'de/fra',
'de/fkb']
uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request failed to complete ' + msg + ' "' + str(
promise['requestId']) + '" to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def _create_machine(module, profitbricks, datacenter, name):
cores = module.params.get('cores')
ram = module.params.get('ram')
cpu_family = module.params.get('cpu_family')
volume_size = module.params.get('volume_size')
disk_type = module.params.get('disk_type')
image_password = module.params.get('image_password')
ssh_keys = module.params.get('ssh_keys')
bus = module.params.get('bus')
lan = module.params.get('lan')
assign_public_ip = module.params.get('assign_public_ip')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
location = module.params.get('location')
image = module.params.get('image')
assign_public_ip = module.boolean(module.params.get('assign_public_ip'))
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
if assign_public_ip:
public_found = False
lans = profitbricks.list_lans(datacenter)
for lan in lans['items']:
if lan['properties']['public']:
public_found = True
lan = lan['id']
if not public_found:
i = LAN(
name='public',
public=True)
lan_response = profitbricks.create_lan(datacenter, i)
_wait_for_completion(profitbricks, lan_response,
wait_timeout, "_create_machine")
lan = lan_response['id']
v = Volume(
name=str(uuid.uuid4()).replace('-', '')[:10],
size=volume_size,
image=image,
image_password=image_password,
ssh_keys=ssh_keys,
disk_type=disk_type,
bus=bus)
n = NIC(
lan=int(lan)
)
s = Server(
name=name,
ram=ram,
cores=cores,
cpu_family=cpu_family,
create_volumes=[v],
nics=[n],
)
try:
create_server_response = profitbricks.create_server(
datacenter_id=datacenter, server=s)
_wait_for_completion(profitbricks, create_server_response,
wait_timeout, "create_virtual_machine")
server_response = profitbricks.get_server(
datacenter_id=datacenter,
server_id=create_server_response['id'],
depth=3
)
except Exception as e:
module.fail_json(msg="failed to create the new server: %s" % str(e))
else:
return server_response
def _startstop_machine(module, profitbricks, datacenter_id, server_id):
state = module.params.get('state')
try:
if state == 'running':
profitbricks.start_server(datacenter_id, server_id)
else:
profitbricks.stop_server(datacenter_id, server_id)
return True
except Exception as e:
module.fail_json(msg="failed to start or stop the virtual machine %s at %s: %s" % (server_id, datacenter_id, str(e)))
def _create_datacenter(module, profitbricks):
datacenter = module.params.get('datacenter')
location = module.params.get('location')
wait_timeout = module.params.get('wait_timeout')
i = Datacenter(
name=datacenter,
location=location
)
try:
datacenter_response = profitbricks.create_datacenter(datacenter=i)
_wait_for_completion(profitbricks, datacenter_response,
wait_timeout, "_create_datacenter")
return datacenter_response
except Exception as e:
module.fail_json(msg="failed to create the new server(s): %s" % str(e))
def create_virtual_machine(module, profitbricks):
"""
Create new virtual machine
module : AnsibleModule object
community.general.profitbricks: authenticated profitbricks object
Returns:
True if a new virtual machine was created, false otherwise
"""
datacenter = module.params.get('datacenter')
name = module.params.get('name')
auto_increment = module.params.get('auto_increment')
count = module.params.get('count')
lan = module.params.get('lan')
wait_timeout = module.params.get('wait_timeout')
failed = True
datacenter_found = False
virtual_machines = []
virtual_machine_ids = []
# Locate UUID for datacenter if referenced by name.
datacenter_list = profitbricks.list_datacenters()
datacenter_id = _get_datacenter_id(datacenter_list, datacenter)
if datacenter_id:
datacenter_found = True
if not datacenter_found:
datacenter_response = _create_datacenter(module, profitbricks)
datacenter_id = datacenter_response['id']
_wait_for_completion(profitbricks, datacenter_response,
wait_timeout, "create_virtual_machine")
if auto_increment:
numbers = set()
count_offset = 1
try:
name % 0
except TypeError as e:
if e.message.startswith('not all'):
name = '%s%%d' % name
else:
module.fail_json(msg=e.message, exception=traceback.format_exc())
number_range = xrange(count_offset, count_offset + count + len(numbers))
available_numbers = list(set(number_range).difference(numbers))
names = []
numbers_to_use = available_numbers[:count]
for number in numbers_to_use:
names.append(name % number)
else:
names = [name]
# Prefetch a list of servers for later comparison.
server_list = profitbricks.list_servers(datacenter_id)
for name in names:
# Skip server creation if the server already exists.
if _get_server_id(server_list, name):
continue
create_response = _create_machine(module, profitbricks, str(datacenter_id), name)
nics = profitbricks.list_nics(datacenter_id, create_response['id'])
for n in nics['items']:
if lan == n['properties']['lan']:
create_response.update({'public_ip': n['properties']['ips'][0]})
virtual_machines.append(create_response)
failed = False
results = {
'failed': failed,
'machines': virtual_machines,
'action': 'create',
'instance_ids': {
'instances': [i['id'] for i in virtual_machines],
}
}
return results
def remove_virtual_machine(module, profitbricks):
"""
Removes a virtual machine.
This will remove the virtual machine along with the bootVolume.
module : AnsibleModule object
community.general.profitbricks: authenticated profitbricks object.
Not yet supported: handle deletion of attached data disks.
Returns:
True if a new virtual server was deleted, false otherwise
"""
datacenter = module.params.get('datacenter')
instance_ids = module.params.get('instance_ids')
remove_boot_volume = module.params.get('remove_boot_volume')
changed = False
if not isinstance(module.params.get('instance_ids'), list) or len(module.params.get('instance_ids')) < 1:
module.fail_json(msg='instance_ids should be a list of virtual machine ids or names, aborting')
# Locate UUID for datacenter if referenced by name.
datacenter_list = profitbricks.list_datacenters()
datacenter_id = _get_datacenter_id(datacenter_list, datacenter)
if not datacenter_id:
module.fail_json(msg='Virtual data center \'%s\' not found.' % str(datacenter))
# Prefetch server list for later comparison.
server_list = profitbricks.list_servers(datacenter_id)
for instance in instance_ids:
# Locate UUID for server if referenced by name.
server_id = _get_server_id(server_list, instance)
if server_id:
# Remove the server's boot volume
if remove_boot_volume:
_remove_boot_volume(module, profitbricks, datacenter_id, server_id)
# Remove the server
try:
server_response = profitbricks.delete_server(datacenter_id, server_id)
except Exception as e:
module.fail_json(msg="failed to terminate the virtual server: %s" % to_native(e), exception=traceback.format_exc())
else:
changed = True
return changed
def _remove_boot_volume(module, profitbricks, datacenter_id, server_id):
"""
Remove the boot volume from the server
"""
try:
server = profitbricks.get_server(datacenter_id, server_id)
volume_id = server['properties']['bootVolume']['id']
volume_response = profitbricks.delete_volume(datacenter_id, volume_id)
except Exception as e:
module.fail_json(msg="failed to remove the server's boot volume: %s" % to_native(e), exception=traceback.format_exc())
def startstop_machine(module, profitbricks, state):
"""
Starts or Stops a virtual machine.
module : AnsibleModule object
community.general.profitbricks: authenticated profitbricks object.
Returns:
True when the servers process the action successfully, false otherwise.
"""
if not isinstance(module.params.get('instance_ids'), list) or len(module.params.get('instance_ids')) < 1:
module.fail_json(msg='instance_ids should be a list of virtual machine ids or names, aborting')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
changed = False
datacenter = module.params.get('datacenter')
instance_ids = module.params.get('instance_ids')
# Locate UUID for datacenter if referenced by name.
datacenter_list = profitbricks.list_datacenters()
datacenter_id = _get_datacenter_id(datacenter_list, datacenter)
if not datacenter_id:
module.fail_json(msg='Virtual data center \'%s\' not found.' % str(datacenter))
# Prefetch server list for later comparison.
server_list = profitbricks.list_servers(datacenter_id)
for instance in instance_ids:
# Locate UUID of server if referenced by name.
server_id = _get_server_id(server_list, instance)
if server_id:
_startstop_machine(module, profitbricks, datacenter_id, server_id)
changed = True
if wait:
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
matched_instances = []
for res in profitbricks.list_servers(datacenter_id)['items']:
if state == 'running':
if res['properties']['vmState'].lower() == state:
matched_instances.append(res)
elif state == 'stopped':
if res['properties']['vmState'].lower() == 'shutoff':
matched_instances.append(res)
if len(matched_instances) < len(instance_ids):
time.sleep(5)
else:
break
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="wait for virtual machine state timeout on %s" % time.asctime())
return (changed)
def _get_datacenter_id(datacenters, identity):
"""
Fetch and return datacenter UUID by datacenter name if found.
"""
for datacenter in datacenters['items']:
if identity in (datacenter['properties']['name'], datacenter['id']):
return datacenter['id']
return None
def _get_server_id(servers, identity):
"""
Fetch and return server UUID by server name if found.
"""
for server in servers['items']:
if identity in (server['properties']['name'], server['id']):
return server['id']
return None
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
name=dict(),
image=dict(),
cores=dict(type='int', default=2),
ram=dict(type='int', default=2048),
cpu_family=dict(choices=['AMD_OPTERON', 'INTEL_XEON'],
default='AMD_OPTERON'),
volume_size=dict(type='int', default=10),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
image_password=dict(no_log=True),
ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
lan=dict(type='int', default=1),
count=dict(type='int', default=1),
auto_increment=dict(type='bool', default=True),
instance_ids=dict(type='list', elements='str', default=[]),
subscription_user=dict(),
subscription_password=dict(no_log=True),
location=dict(choices=LOCATIONS, default='us/las'),
assign_public_ip=dict(type='bool', default=False),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
remove_boot_volume=dict(type='bool', default=True),
state=dict(default='present'),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required ' +
'for removing machines.')
try:
(changed) = remove_virtual_machine(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % to_native(e), exception=traceback.format_exc())
elif state in ('running', 'stopped'):
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required for ' +
'running or stopping machines.')
try:
(changed) = startstop_machine(module, profitbricks, state)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % to_native(e), exception=traceback.format_exc())
elif state == 'present':
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for new instance')
if not module.params.get('image'):
module.fail_json(msg='image parameter is required for new instance')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is ' +
'required for new instance')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is ' +
'required for new instance')
try:
(machine_dict_array) = create_virtual_machine(module, profitbricks)
module.exit_json(**machine_dict_array)
except Exception as e:
module.fail_json(msg='failed to set instance state: %s' % to_native(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1,273 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: profitbricks_datacenter
short_description: Create or destroy a ProfitBricks Virtual Datacenter
description:
- This is a simple module that supports creating or removing vDCs. A vDC is required before you can create servers. This
module has a dependency on profitbricks >= 1.0.0.
deprecated:
removed_in: 11.0.0
why: Module relies on library unsupported since 2021.
alternative: >
Profitbricks has rebranded as Ionos Cloud and they provide a collection named ionoscloudsdk.ionoscloud.
Whilst it is likely it will provide the features of this module, that has not been verified.
Please refer to that collection's documentation for more details.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
name:
description:
- The name of the virtual datacenter.
type: str
description:
description:
- The description of the virtual datacenter.
type: str
required: false
location:
description:
- The datacenter location.
type: str
required: false
default: us/las
choices: ["us/las", "de/fra", "de/fkb"]
subscription_user:
description:
- The ProfitBricks username. Overrides the E(PB_SUBSCRIPTION_ID) environment variable.
type: str
required: false
subscription_password:
description:
- The ProfitBricks password. Overrides the E(PB_PASSWORD) environment variable.
type: str
required: false
wait:
description:
- Wait for the datacenter to be created before returning.
required: false
default: true
type: bool
wait_timeout:
description:
- How long before wait gives up, in seconds.
type: int
default: 600
state:
description:
- Create or terminate datacenters.
- 'The available choices are: V(present), V(absent).'
type: str
required: false
default: 'present'
requirements: ["profitbricks"]
author: Matt Baldwin (@baldwinSPC) <baldwin@stackpointcloud.com>
"""
EXAMPLES = r"""
- name: Create a datacenter
community.general.profitbricks_datacenter:
datacenter: Tardis One
wait_timeout: 500
- name: Destroy a datacenter (remove all servers, volumes, and other objects in the datacenter)
community.general.profitbricks_datacenter:
datacenter: Tardis One
wait_timeout: 500
state: absent
"""
import re
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, Datacenter
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
LOCATIONS = ['us/las',
'de/fra',
'de/fkb']
uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request failed to complete ' + msg + ' "' + str(
promise['requestId']) + '" to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def _remove_datacenter(module, profitbricks, datacenter):
try:
profitbricks.delete_datacenter(datacenter)
except Exception as e:
module.fail_json(msg="failed to remove the datacenter: %s" % str(e))
def create_datacenter(module, profitbricks):
"""
Creates a Datacenter
This will create a new Datacenter in the specified location.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if a new datacenter was created, false otherwise
"""
name = module.params.get('name')
location = module.params.get('location')
description = module.params.get('description')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
i = Datacenter(
name=name,
location=location,
description=description
)
try:
datacenter_response = profitbricks.create_datacenter(datacenter=i)
if wait:
_wait_for_completion(profitbricks, datacenter_response,
wait_timeout, "_create_datacenter")
results = {
'datacenter_id': datacenter_response['id']
}
return results
except Exception as e:
module.fail_json(msg="failed to create the new datacenter: %s" % str(e))
def remove_datacenter(module, profitbricks):
"""
Removes a Datacenter.
This will remove a datacenter.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the datacenter was deleted, false otherwise
"""
name = module.params.get('name')
changed = False
if uuid_match.match(name):
_remove_datacenter(module, profitbricks, name)
changed = True
else:
datacenters = profitbricks.list_datacenters()
for d in datacenters['items']:
vdc = profitbricks.get_datacenter(d['id'])
if name == vdc['properties']['name']:
name = d['id']
_remove_datacenter(module, profitbricks, name)
changed = True
return changed
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(),
description=dict(),
location=dict(choices=LOCATIONS, default='us/las'),
subscription_user=dict(),
subscription_password=dict(no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(default=600, type='int'),
state=dict(default='present'), # @TODO add choices
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for deleting a virtual datacenter.')
try:
(changed) = remove_datacenter(module, profitbricks)
module.exit_json(
changed=changed)
except Exception as e:
module.fail_json(msg='failed to set datacenter state: %s' % str(e))
elif state == 'present':
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for a new datacenter')
if not module.params.get('location'):
module.fail_json(msg='location parameter is required for a new datacenter')
try:
(datacenter_dict_array) = create_datacenter(module, profitbricks)
module.exit_json(**datacenter_dict_array)
except Exception as e:
module.fail_json(msg='failed to set datacenter state: %s' % str(e))
if __name__ == '__main__':
main()

View file

@ -1,303 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: profitbricks_nic
short_description: Create or Remove a NIC
description:
- This module allows you to create or remove a NIC. This module has a dependency on profitbricks >= 1.0.0.
deprecated:
removed_in: 11.0.0
why: Module relies on library unsupported since 2021.
alternative: >
Profitbricks has rebranded as Ionos Cloud and they provide a collection named ionoscloudsdk.ionoscloud.
Whilst it is likely it will provide the features of this module, that has not been verified.
Please refer to that collection's documentation for more details.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
datacenter:
description:
- The datacenter in which to operate.
type: str
required: true
server:
description:
- The server name or ID.
type: str
required: true
name:
description:
- The name or ID of the NIC. This is only required on deletes, but not on create.
- If not specified, it defaults to a value based on UUID4.
type: str
lan:
description:
- The LAN to place the NIC on. You can pass a LAN that does not exist and it will be created. Required on create.
type: str
subscription_user:
description:
- The ProfitBricks username. Overrides the E(PB_SUBSCRIPTION_ID) environment variable.
type: str
required: true
subscription_password:
description:
- The ProfitBricks password. Overrides the E(PB_PASSWORD) environment variable.
type: str
required: true
wait:
description:
- Wait for the operation to complete before returning.
required: false
default: true
type: bool
wait_timeout:
description:
- How long before wait gives up, in seconds.
type: int
default: 600
state:
description:
- Indicate desired state of the resource.
- 'The available choices are: V(present), V(absent).'
type: str
required: false
default: 'present'
requirements: ["profitbricks"]
author: Matt Baldwin (@baldwinSPC) <baldwin@stackpointcloud.com>
"""
EXAMPLES = r"""
- name: Create a NIC
community.general.profitbricks_nic:
datacenter: Tardis One
server: node002
lan: 2
wait_timeout: 500
state: present
- name: Remove a NIC
community.general.profitbricks_nic:
datacenter: Tardis One
server: node002
name: 7341c2454f
wait_timeout: 500
state: absent
"""
import re
import uuid
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, NIC
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _make_default_name():
return str(uuid.uuid4()).replace('-', '')[:10]
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request failed to complete ' + msg + ' "' + str(
promise['requestId']) + '" to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def create_nic(module, profitbricks):
"""
Creates a NIC.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the NIC was created, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
lan = module.params.get('lan')
name = module.params.get('name')
if name is None:
name = _make_default_name()
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
try:
n = NIC(
name=name,
lan=lan
)
nic_response = profitbricks.create_nic(datacenter, server, n)
if wait:
_wait_for_completion(profitbricks, nic_response,
wait_timeout, "create_nic")
return nic_response
except Exception as e:
module.fail_json(msg="failed to create the NIC: %s" % str(e))
def delete_nic(module, profitbricks):
"""
Removes a NIC
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the NIC was removed, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
name = module.params.get('name')
if name is None:
name = _make_default_name()
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
server_found = False
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server_found = True
server = s['id']
break
if not server_found:
return False
# Locate UUID for NIC
nic_found = False
if not (uuid_match.match(name)):
nic_list = profitbricks.list_nics(datacenter, server)
for n in nic_list['items']:
if name == n['properties']['name']:
nic_found = True
name = n['id']
break
if not nic_found:
return False
try:
nic_response = profitbricks.delete_nic(datacenter, server, name)
return nic_response
except Exception as e:
module.fail_json(msg="failed to remove the NIC: %s" % str(e))
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(required=True),
server=dict(required=True),
name=dict(),
lan=dict(),
subscription_user=dict(required=True),
subscription_password=dict(required=True, no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
),
required_if=(
('state', 'absent', ['name']),
('state', 'present', ['lan']),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
try:
(changed) = delete_nic(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set nic state: %s' % str(e))
elif state == 'present':
try:
(nic_dict) = create_nic(module, profitbricks)
module.exit_json(nics=nic_dict) # @FIXME changed not calculated?
except Exception as e:
module.fail_json(msg='failed to set nic state: %s' % str(e))
if __name__ == '__main__':
main()

View file

@ -1,448 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: profitbricks_volume
short_description: Create or destroy a volume
description:
- Allows you to create or remove a volume from a ProfitBricks datacenter. This module has a dependency on profitbricks >=
1.0.0.
deprecated:
removed_in: 11.0.0
why: Module relies on library unsupported since 2021.
alternative: >
Profitbricks has rebranded as Ionos Cloud and they provide a collection named ionoscloudsdk.ionoscloud.
Whilst it is likely it will provide the features of this module, that has not been verified.
Please refer to that collection's documentation for more details.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
datacenter:
description:
- The datacenter in which to create the volumes.
type: str
name:
description:
- The name of the volumes. You can enumerate the names using auto_increment.
type: str
size:
description:
- The size of the volume.
type: int
required: false
default: 10
bus:
description:
- The bus type.
type: str
required: false
default: VIRTIO
choices: ["IDE", "VIRTIO"]
image:
description:
- The system image ID for the volume, for example V(a3eae284-a2fe-11e4-b187-5f1f641608c8). This can also be a snapshot
image ID.
type: str
image_password:
description:
- Password set for the administrative user.
type: str
required: false
ssh_keys:
description:
- Public SSH keys allowing access to the virtual machine.
type: list
elements: str
default: []
disk_type:
description:
- The disk type of the volume.
type: str
required: false
default: HDD
choices: ["HDD", "SSD"]
licence_type:
description:
- The licence type for the volume. This is used when the image is non-standard.
- 'The available choices are: V(LINUX), V(WINDOWS), V(UNKNOWN), V(OTHER).'
type: str
required: false
default: UNKNOWN
count:
description:
- The number of volumes you wish to create.
type: int
required: false
default: 1
auto_increment:
description:
- Whether or not to increment a single number in the name for created volumes.
default: true
type: bool
instance_ids:
description:
- List of instance IDs, currently only used when O(state=absent) to remove instances.
type: list
elements: str
default: []
subscription_user:
description:
- The ProfitBricks username. Overrides the E(PB_SUBSCRIPTION_ID) environment variable.
type: str
required: false
subscription_password:
description:
- The ProfitBricks password. Overrides the E(PB_PASSWORD) environment variable.
type: str
required: false
wait:
description:
- Wait for the volume to be created before returning.
required: false
default: true
type: bool
wait_timeout:
description:
- How long before wait gives up, in seconds.
type: int
default: 600
state:
description:
- Create or terminate volumes.
- 'The available choices are: V(present), V(absent).'
type: str
required: false
default: 'present'
server:
description:
- Server name to attach the volume to.
type: str
requirements: ["profitbricks"]
author: Matt Baldwin (@baldwinSPC) <baldwin@stackpointcloud.com>
"""
EXAMPLES = r"""
- name: Create multiple volumes
community.general.profitbricks_volume:
datacenter: Tardis One
name: vol%02d
count: 5
auto_increment: true
wait_timeout: 500
state: present
- name: Remove Volumes
community.general.profitbricks_volume:
datacenter: Tardis One
instance_ids:
- 'vol01'
- 'vol02'
wait_timeout: 500
state: absent
"""
import re
import time
import traceback
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, Volume
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xrange
from ansible.module_utils.common.text.converters import to_native
uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request failed to complete ' + msg + ' "' + str(
promise['requestId']) + '" to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def _create_volume(module, profitbricks, datacenter, name):
size = module.params.get('size')
bus = module.params.get('bus')
image = module.params.get('image')
image_password = module.params.get('image_password')
ssh_keys = module.params.get('ssh_keys')
disk_type = module.params.get('disk_type')
licence_type = module.params.get('licence_type')
wait_timeout = module.params.get('wait_timeout')
wait = module.params.get('wait')
try:
v = Volume(
name=name,
size=size,
bus=bus,
image=image,
image_password=image_password,
ssh_keys=ssh_keys,
disk_type=disk_type,
licence_type=licence_type
)
volume_response = profitbricks.create_volume(datacenter, v)
if wait:
_wait_for_completion(profitbricks, volume_response,
wait_timeout, "_create_volume")
except Exception as e:
module.fail_json(msg="failed to create the volume: %s" % str(e))
return volume_response
def _delete_volume(module, profitbricks, datacenter, volume):
try:
profitbricks.delete_volume(datacenter, volume)
except Exception as e:
module.fail_json(msg="failed to remove the volume: %s" % str(e))
def create_volume(module, profitbricks):
"""
Creates a volume.
This will create a volume in a datacenter.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was created, false otherwise
"""
datacenter = module.params.get('datacenter')
name = module.params.get('name')
auto_increment = module.params.get('auto_increment')
count = module.params.get('count')
datacenter_found = False
failed = True
volumes = []
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
datacenter_found = True
break
if not datacenter_found:
module.fail_json(msg='datacenter could not be found.')
if auto_increment:
numbers = set()
count_offset = 1
try:
name % 0
except TypeError as e:
if e.message.startswith('not all'):
name = '%s%%d' % name
else:
module.fail_json(msg=e.message, exception=traceback.format_exc())
number_range = xrange(count_offset, count_offset + count + len(numbers))
available_numbers = list(set(number_range).difference(numbers))
names = []
numbers_to_use = available_numbers[:count]
for number in numbers_to_use:
names.append(name % number)
else:
names = [name] * count
for name in names:
create_response = _create_volume(module, profitbricks, str(datacenter), name)
volumes.append(create_response)
_attach_volume(module, profitbricks, datacenter, create_response['id'])
failed = False
results = {
'failed': failed,
'volumes': volumes,
'action': 'create',
'instance_ids': {
'instances': [i['id'] for i in volumes],
}
}
return results
def delete_volume(module, profitbricks):
"""
Removes a volume.
This will remove a volume from a datacenter.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was removed, false otherwise
"""
if not isinstance(module.params.get('instance_ids'), list) or len(module.params.get('instance_ids')) < 1:
module.fail_json(msg='instance_ids should be a list of volume ids or names, aborting')
datacenter = module.params.get('datacenter')
changed = False
instance_ids = module.params.get('instance_ids')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
for n in instance_ids:
if uuid_match.match(n):
_delete_volume(module, profitbricks, datacenter, n)
changed = True
else:
volumes = profitbricks.list_volumes(datacenter)
for v in volumes['items']:
if n == v['properties']['name']:
volume_id = v['id']
_delete_volume(module, profitbricks, datacenter, volume_id)
changed = True
return changed
def _attach_volume(module, profitbricks, datacenter, volume):
"""
Attaches a volume.
This will attach a volume to the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was attached, false otherwise
"""
server = module.params.get('server')
# Locate UUID for Server
if server:
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
try:
return profitbricks.attach_volume(datacenter, server, volume)
except Exception as e:
module.fail_json(msg='failed to attach volume: %s' % to_native(e), exception=traceback.format_exc())
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
server=dict(),
name=dict(),
size=dict(type='int', default=10),
bus=dict(choices=['VIRTIO', 'IDE'], default='VIRTIO'),
image=dict(),
image_password=dict(no_log=True),
ssh_keys=dict(type='list', elements='str', default=[], no_log=False),
disk_type=dict(choices=['HDD', 'SSD'], default='HDD'),
licence_type=dict(default='UNKNOWN'),
count=dict(type='int', default=1),
auto_increment=dict(type='bool', default=True),
instance_ids=dict(type='list', elements='str', default=[]),
subscription_user=dict(),
subscription_password=dict(no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
)
)
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required for deleting a volume.')
try:
(changed) = delete_volume(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set volume state: %s' % to_native(e), exception=traceback.format_exc())
elif state == 'present':
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required for new instance')
if not module.params.get('name'):
module.fail_json(msg='name parameter is required for new instance')
try:
(volume_dict_array) = create_volume(module, profitbricks)
module.exit_json(**volume_dict_array)
except Exception as e:
module.fail_json(msg='failed to set volume state: %s' % to_native(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1,273 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: profitbricks_volume_attachments
short_description: Attach or detach a volume
description:
- Allows you to attach or detach a volume from a ProfitBricks server. This module has a dependency on profitbricks >= 1.0.0.
deprecated:
removed_in: 11.0.0
why: Module relies on library unsupported since 2021.
alternative: >
Profitbricks has rebranded as Ionos Cloud and they provide a collection named ionoscloudsdk.ionoscloud.
Whilst it is likely it will provide the features of this module, that has not been verified.
Please refer to that collection's documentation for more details.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
datacenter:
description:
- The datacenter in which to operate.
type: str
server:
description:
- The name of the server to attach the volume to or detach it from.
type: str
volume:
description:
- The volume name or ID.
type: str
subscription_user:
description:
- The ProfitBricks username. Overrides the E(PB_SUBSCRIPTION_ID) environment variable.
type: str
required: false
subscription_password:
description:
- The ProfitBricks password. Overrides the E(PB_PASSWORD) environment variable.
type: str
required: false
wait:
description:
- Wait for the operation to complete before returning.
required: false
default: true
type: bool
wait_timeout:
description:
- How long before wait gives up, in seconds.
type: int
default: 600
state:
description:
- Indicate desired state of the resource.
- 'The available choices are: V(present), V(absent).'
type: str
required: false
default: 'present'
requirements: ["profitbricks"]
author: Matt Baldwin (@baldwinSPC) <baldwin@stackpointcloud.com>
"""
EXAMPLES = r"""
- name: Attach a volume
community.general.profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: present
- name: Detach a volume
community.general.profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: absent
"""
import re
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService
except ImportError:
HAS_PB_SDK = False
from ansible.module_utils.basic import AnsibleModule
uuid_match = re.compile(
r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
raise Exception(
'Request ' + msg + ' "' + str(
promise['requestId']) + '" failed to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def attach_volume(module, profitbricks):
"""
Attaches a volume.
This will attach a volume to the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was attached, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.attach_volume(datacenter, server, volume)
def detach_volume(module, profitbricks):
"""
Detaches a volume.
This will remove a volume from the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
True if the volume was detached, false otherwise
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.detach_volume(datacenter, server, volume)
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
server=dict(),
volume=dict(),
subscription_user=dict(),
subscription_password=dict(no_log=True),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required')
if not module.params.get('server'):
module.fail_json(msg='server parameter is required')
if not module.params.get('volume'):
module.fail_json(msg='volume parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
try:
(changed) = detach_volume(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
elif state == 'present':
try:
attach_volume(module, profitbricks)
module.exit_json()
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
if __name__ == '__main__':
main()

View file

@ -228,10 +228,11 @@ options:
update:
description:
- If V(true), the container will be updated with new values.
- The current default value of V(false) is deprecated and will change to V(true) in community.general 11.0.0.
Please set O(update) explicitly to V(false) or V(true) to avoid surprises and get rid of the deprecation warning.
- If V(false), it will not be updated.
- The default changed from V(false) to V(true) in community.general 11.0.0.
type: bool
version_added: 8.1.0
default: true
force:
description:
- Forcing operations.
@ -700,7 +701,7 @@ def get_proxmox_args():
nameserver=dict(),
searchdomain=dict(),
timeout=dict(type="int", default=30),
update=dict(type="bool"),
update=dict(type="bool", default=True),
force=dict(type="bool", default=False),
purge=dict(type="bool", default=False),
state=dict(
@ -844,15 +845,6 @@ class ProxmoxLxcAnsible(ProxmoxAnsible):
# check if the container exists already
if lxc is not None:
if update is None:
# TODO: Remove deprecation warning in version 11.0.0
self.module.deprecate(
msg="The default value of false for 'update' has been deprecated and will be changed to true in version 11.0.0.",
version="11.0.0",
collection_name="community.general",
)
update = False
if update:
# Update it if we should
identifier = self.format_vm_identifier(vmid, hostname)
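With the default of the update option flipped to true, playbooks that relied on the old create-only behaviour have to request it explicitly. The task below is a minimal sketch (not part of the diff above): the API host, credentials, vmid, and hostname are placeholder values, and the explicit update/state settings are the only point of the example.

- name: Keep the pre-11.0.0 behaviour by pinning update explicitly
  community.general.proxmox:
    api_host: proxmox.example.com
    api_user: root@pam
    api_password: "{{ proxmox_password }}"
    vmid: 100
    hostname: example-lxc
    update: false
    state: present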

View file

@ -203,7 +203,6 @@ class Snap(StateModuleHelper):
},
supports_check_mode=True,
)
use_old_vardict = False
@staticmethod
def _first_non_zero(a):

View file

@ -109,7 +109,6 @@ class SnapAlias(StateModuleHelper):
],
supports_check_mode=True,
)
use_old_vardict = False
def _aliases(self):
n = self.vars.name

View file

@ -103,7 +103,6 @@ class XdgMime(ModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.runner = xdg_mime_runner(self.module, check_rc=True)

View file

@ -190,7 +190,6 @@ class XFConfProperty(StateModuleHelper):
required_together=[('value', 'value_type')],
supports_check_mode=True,
)
use_old_vardict = False
default_state = 'present'

View file

@ -142,7 +142,6 @@ class XFConfInfo(ModuleHelper):
),
supports_check_mode=True,
)
use_old_vardict = False
def __init_module__(self):
self.runner = xfconf_runner(self.module, check_rc=True)

View file

@ -1,20 +0,0 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- import_tasks: setup_no_value.yml
- name: testing exclusion between state and list_all parameters
git_config:
list_all: true
state: absent
register: result
ignore_errors: true
- name: assert git_config failed
assert:
that:
- result is failed
- "result.msg == 'parameters are mutually exclusive: list_all|state'"
...

View file

@ -13,7 +13,7 @@
register: set_result
- name: getting value without state
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result
@ -24,6 +24,5 @@
- set_result is changed
- set_result.diff.before == "\n"
- set_result.diff.after == option_value + "\n"
- get_result is not changed
- get_result.config_value == option_value
...
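These test updates reflect that reading values is no longer done through git_config itself; the dedicated git_config_info module is used instead. A minimal sketch of the read pattern (not part of the diff above), assuming a global option such as user.name is already set:

- name: Read a single option with the info module
  community.general.git_config_info:
    name: user.name
    scope: global
  register: get_result

- name: Show the value that was read
  ansible.builtin.debug:
    msg: "user.name is {{ get_result.config_value }}"

For multi-valued options, the same module also returns all values in config_values, as exercised further down in these tests.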

View file

@ -14,10 +14,9 @@
register: result
- name: getting value with state=present
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
state: present
register: get_result
- name: assert set changed and value is correct with state=present
@ -26,6 +25,5 @@
- set_result is changed
- set_result.diff.before == "\n"
- set_result.diff.after == option_value + "\n"
- get_result is not changed
- get_result.config_value == option_value
...

View file

@ -15,11 +15,10 @@
register: result
- name: getting value with state=present
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "file"
file: "{{ remote_tmp_dir }}/gitconfig_file"
state: present
path: "{{ remote_tmp_dir }}/gitconfig_file"
register: get_result
- name: assert set changed and value is correct with state=present
@ -28,6 +27,5 @@
- set_result is changed
- set_result.diff.before == "\n"
- set_result.diff.after == option_value + "\n"
- get_result is not changed
- get_result.config_value == option_value
...

View file

@ -14,8 +14,6 @@
- block:
- import_tasks: set_value.yml
# testing parameters exclusion: state and list_all
- import_tasks: exclusion_state_list-all.yml
# testing get/set option without state
- import_tasks: get_set_no_state.yml
# testing get/set option with state=present

View file

@ -14,7 +14,7 @@
register: unset_result
- name: getting value
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result

View file

@ -31,17 +31,11 @@
- 'merge_request.target=foobar'
register: set_result2
- name: getting the multi-value
git_config:
name: push.pushoption
scope: global
register: get_single_result
- name: getting all values for the single option
git_config_info:
name: push.pushoption
scope: global
register: get_all_result
register: get_result
- name: replace-all values
git_config:
@ -62,8 +56,8 @@
- set_result2.results[1] is not changed
- set_result2.results[2] is not changed
- set_result3 is changed
- get_single_result.config_value == 'merge_request.create'
- 'get_all_result.config_values == {"push.pushoption": ["merge_request.create", "merge_request.draft", "merge_request.target=foobar"]}'
- get_result.config_value == 'merge_request.create'
- 'get_result.config_values == {"push.pushoption": ["merge_request.create", "merge_request.draft", "merge_request.target=foobar"]}'
- name: assert the diffs are also right
assert:

View file

@ -20,7 +20,7 @@
register: set_result2
- name: getting value
git_config:
git_config_info:
name: core.name
scope: global
register: get_result
@ -30,7 +30,6 @@
that:
- set_result1 is changed
- set_result2 is changed
- get_result is not changed
- get_result.config_value == 'bar'
- set_result1.diff.before == "\n"
- set_result1.diff.after == "foo\n"

View file

@ -22,7 +22,7 @@
register: set_result2
- name: getting value
git_config:
git_config_info:
name: core.hooksPath
scope: global
register: get_result
@ -32,6 +32,5 @@
that:
- set_result1 is changed
- set_result2 is not changed
- get_result is not changed
- get_result.config_value == '~/foo/bar'
...

View file

@ -14,7 +14,7 @@
register: unset_result
- name: getting value
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result

View file

@ -13,7 +13,7 @@
register: unset_result
- name: getting value
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result

View file

@ -13,7 +13,7 @@
register: unset_result
- name: getting value
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result
@ -37,7 +37,7 @@
register: unset_result
- name: getting value
git_config:
git_config_info:
name: "{{ option_name }}"
scope: "{{ option_scope }}"
register: get_result

View file

@ -57,7 +57,7 @@ class MSimple(ModuleHelper):
raise Exception("a >= 100")
if self.vars.c == "abc change":
self.vars['abc'] = "changed abc"
if self.vars.get('a', 0) == 2:
if self.vars.a == 2:
self.vars['b'] = str(self.vars.b) * 2
self.vars['c'] = str(self.vars.c) * 2

View file

@ -63,7 +63,7 @@ class MSimple(ModuleHelper):
raise Exception("a >= 100")
if self.vars.c == "abc change":
self.vars['abc'] = "changed abc"
if self.vars.get('a', 0) == 2:
if self.vars.a == 2:
self.vars['b'] = str(self.vars.b) * 2
self.vars['c'] = str(self.vars.c) * 2
self.process_a3_bc()

View file

@ -49,7 +49,6 @@ class MState(StateModuleHelper):
state=dict(type='str', choices=['join', 'b_x_a', 'c_x_a', 'both_x_a', 'nop'], default='join'),
),
)
use_old_vardict = False
def __init_module__(self):
self.vars.set('result', "abc", diff=True)

View file

@ -1,206 +0,0 @@
# Copyright (c) 2020 Shay Rybak <shay.rybak@stackpath.com>
# Copyright (c) 2020 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.errors import AnsibleError
from ansible.inventory.data import InventoryData
from ansible_collections.community.general.plugins.inventory.stackpath_compute import InventoryModule
@pytest.fixture(scope="module")
def inventory():
r = InventoryModule()
r.inventory = InventoryData()
return r
def test_get_stack_slugs(inventory):
stacks = [
{
'status': 'ACTIVE',
'name': 'test1',
'id': 'XXXX',
'updatedAt': '2020-07-08T01:00:00.000000Z',
'slug': 'test1',
'createdAt': '2020-07-08T00:00:00.000000Z',
'accountId': 'XXXX',
}, {
'status': 'ACTIVE',
'name': 'test2',
'id': 'XXXX',
'updatedAt': '2019-10-22T18:00:00.000000Z',
'slug': 'test2',
'createdAt': '2019-10-22T18:00:00.000000Z',
'accountId': 'XXXX',
}, {
'status': 'DISABLED',
'name': 'test3',
'id': 'XXXX',
'updatedAt': '2020-01-16T20:00:00.000000Z',
'slug': 'test3',
'createdAt': '2019-10-15T13:00:00.000000Z',
'accountId': 'XXXX',
}, {
'status': 'ACTIVE',
'name': 'test4',
'id': 'XXXX',
'updatedAt': '2019-11-20T22:00:00.000000Z',
'slug': 'test4',
'createdAt': '2019-11-20T22:00:00.000000Z',
'accountId': 'XXXX',
}
]
inventory._get_stack_slugs(stacks)
assert len(inventory.stack_slugs) == 4
assert inventory.stack_slugs == [
"test1",
"test2",
"test3",
"test4"
]
def test_verify_file(tmp_path, inventory):
file = tmp_path / "foobar.stackpath_compute.yml"
file.touch()
assert inventory.verify_file(str(file)) is True
def test_verify_file_bad_config(inventory):
assert inventory.verify_file('foobar.stackpath_compute.yml') is False
def test_validate_config(inventory):
config = {
"client_secret": "short_client_secret",
"use_internal_ip": False,
"stack_slugs": ["test1"],
"client_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"plugin": "community.general.stackpath_compute",
}
with pytest.raises(AnsibleError) as error_message:
inventory._validate_config(config)
assert "client_secret must be 64 characters long" in error_message
config = {
"client_secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"use_internal_ip": True,
"stack_slugs": ["test1"],
"client_id": "short_client_id",
"plugin": "community.general.stackpath_compute",
}
with pytest.raises(AnsibleError) as error_message:
inventory._validate_config(config)
assert "client_id must be 32 characters long" in error_message
config = {
"use_internal_ip": True,
"stack_slugs": ["test1"],
"client_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"plugin": "community.general.stackpath_compute",
}
with pytest.raises(AnsibleError) as error_message:
inventory._validate_config(config)
assert "config missing client_secret, a required parameter" in error_message
config = {
"client_secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"use_internal_ip": False,
"plugin": "community.general.stackpath_compute",
}
with pytest.raises(AnsibleError) as error_message:
inventory._validate_config(config)
assert "config missing client_id, a required parameter" in error_message
def test_populate(inventory):
instances = [
{
"name": "instance1",
"countryCode": "SE",
"workloadSlug": "wokrload1",
"continent": "Europe",
"workloadId": "id1",
"cityCode": "ARN",
"externalIpAddress": "20.0.0.1",
"target": "target1",
"stackSlug": "stack1",
"ipAddress": "10.0.0.1",
},
{
"name": "instance2",
"countryCode": "US",
"workloadSlug": "wokrload2",
"continent": "America",
"workloadId": "id2",
"cityCode": "JFK",
"externalIpAddress": "20.0.0.2",
"target": "target2",
"stackSlug": "stack1",
"ipAddress": "10.0.0.2",
},
{
"name": "instance3",
"countryCode": "SE",
"workloadSlug": "workload3",
"continent": "Europe",
"workloadId": "id3",
"cityCode": "ARN",
"externalIpAddress": "20.0.0.3",
"target": "target1",
"stackSlug": "stack2",
"ipAddress": "10.0.0.3",
},
{
"name": "instance4",
"countryCode": "US",
"workloadSlug": "workload3",
"continent": "America",
"workloadId": "id4",
"cityCode": "JFK",
"externalIpAddress": "20.0.0.4",
"target": "target2",
"stackSlug": "stack2",
"ipAddress": "10.0.0.4",
},
]
inventory.hostname_key = "externalIpAddress"
inventory._populate(instances)
# get different hosts
host1 = inventory.inventory.get_host('20.0.0.1')
host2 = inventory.inventory.get_host('20.0.0.2')
host3 = inventory.inventory.get_host('20.0.0.3')
host4 = inventory.inventory.get_host('20.0.0.4')
# get different groups
assert 'citycode_arn' in inventory.inventory.groups
group_citycode_arn = inventory.inventory.groups['citycode_arn']
assert 'countrycode_se' in inventory.inventory.groups
group_countrycode_se = inventory.inventory.groups['countrycode_se']
assert 'continent_america' in inventory.inventory.groups
group_continent_america = inventory.inventory.groups['continent_america']
assert 'name_instance1' in inventory.inventory.groups
group_name_instance1 = inventory.inventory.groups['name_instance1']
assert 'stackslug_stack1' in inventory.inventory.groups
group_stackslug_stack1 = inventory.inventory.groups['stackslug_stack1']
assert 'target_target1' in inventory.inventory.groups
group_target_target1 = inventory.inventory.groups['target_target1']
assert 'workloadslug_workload3' in inventory.inventory.groups
group_workloadslug_workload3 = inventory.inventory.groups['workloadslug_workload3']
assert 'workloadid_id1' in inventory.inventory.groups
group_workloadid_id1 = inventory.inventory.groups['workloadid_id1']
assert group_citycode_arn.hosts == [host1, host3]
assert group_countrycode_se.hosts == [host1, host3]
assert group_continent_america.hosts == [host2, host4]
assert group_name_instance1.hosts == [host1]
assert group_stackslug_stack1.hosts == [host1, host2]
assert group_target_target1.hosts == [host1, host3]
assert group_workloadslug_workload3.hosts == [host3, host4]
assert group_workloadid_id1.hosts == [host1]

View file

@ -1,537 +0,0 @@
# Copyright (c) 2018, Arigato Machine Inc.
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible_collections.community.internal_test_tools.tests.unit.compat import unittest
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch, call
from ansible.errors import AnsibleError
from ansible.module_utils.urls import ConnectionError, SSLValidationError
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils import six
from ansible.plugins.loader import lookup_loader
from ansible_collections.community.general.plugins.lookup.manifold import ManifoldApiClient, ApiError
import json
import os
API_FIXTURES = {
'https://api.marketplace.manifold.co/v1/resources':
[
{
"body": {
"label": "resource-1",
"name": "Resource 1"
},
"id": "rid-1"
},
{
"body": {
"label": "resource-2",
"name": "Resource 2"
},
"id": "rid-2"
}
],
'https://api.marketplace.manifold.co/v1/resources?label=resource-1':
[
{
"body": {
"label": "resource-1",
"name": "Resource 1"
},
"id": "rid-1"
}
],
'https://api.marketplace.manifold.co/v1/resources?label=resource-2':
[
{
"body": {
"label": "resource-2",
"name": "Resource 2"
},
"id": "rid-2"
}
],
'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1':
[
{
"body": {
"label": "resource-1",
"name": "Resource 1"
},
"id": "rid-1"
}
],
'https://api.marketplace.manifold.co/v1/resources?project_id=pid-1':
[
{
"body": {
"label": "resource-2",
"name": "Resource 2"
},
"id": "rid-2"
}
],
'https://api.marketplace.manifold.co/v1/resources?project_id=pid-2':
[
{
"body": {
"label": "resource-1",
"name": "Resource 1"
},
"id": "rid-1"
},
{
"body": {
"label": "resource-3",
"name": "Resource 3"
},
"id": "rid-3"
}
],
'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1':
[
{
"body": {
"label": "resource-1",
"name": "Resource 1"
},
"id": "rid-1"
}
],
'https://api.marketplace.manifold.co/v1/projects':
[
{
"body": {
"label": "project-1",
"name": "Project 1",
},
"id": "pid-1",
},
{
"body": {
"label": "project-2",
"name": "Project 2",
},
"id": "pid-2",
}
],
'https://api.marketplace.manifold.co/v1/projects?label=project-2':
[
{
"body": {
"label": "project-2",
"name": "Project 2",
},
"id": "pid-2",
}
],
'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-1':
[
{
"body": {
"resource_id": "rid-1",
"values": {
"RESOURCE_TOKEN_1": "token-1",
"RESOURCE_TOKEN_2": "token-2"
}
},
"id": "cid-1",
}
],
'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-2':
[
{
"body": {
"resource_id": "rid-2",
"values": {
"RESOURCE_TOKEN_3": "token-3",
"RESOURCE_TOKEN_4": "token-4"
}
},
"id": "cid-2",
}
],
'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-3':
[
{
"body": {
"resource_id": "rid-3",
"values": {
"RESOURCE_TOKEN_1": "token-5",
"RESOURCE_TOKEN_2": "token-6"
}
},
"id": "cid-3",
}
],
'https://api.identity.manifold.co/v1/teams':
[
{
"id": "tid-1",
"body": {
"name": "Team 1",
"label": "team-1"
}
},
{
"id": "tid-2",
"body": {
"name": "Team 2",
"label": "team-2"
}
}
]
}
def mock_fixture(open_url_mock, fixture=None, data=None, headers=None):
if not headers:
headers = {}
if fixture:
data = json.dumps(API_FIXTURES[fixture])
if 'content-type' not in headers:
headers['content-type'] = 'application/json'
open_url_mock.return_value.read.return_value = data
open_url_mock.return_value.headers = headers
class TestManifoldApiClient(unittest.TestCase):
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_sends_default_headers(self, open_url_mock):
mock_fixture(open_url_mock, data='hello')
client = ManifoldApiClient('token-123')
client.request('test', 'endpoint')
open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_decodes_json(self, open_url_mock):
mock_fixture(open_url_mock, fixture='https://api.marketplace.manifold.co/v1/resources')
client = ManifoldApiClient('token-123')
self.assertIsInstance(client.request('marketplace', 'resources'), list)
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_streams_text(self, open_url_mock):
mock_fixture(open_url_mock, data='hello', headers={'content-type': "text/plain"})
client = ManifoldApiClient('token-123')
self.assertEqual('hello', client.request('test', 'endpoint'))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_processes_parameterized_headers(self, open_url_mock):
mock_fixture(open_url_mock, data='hello')
client = ManifoldApiClient('token-123')
client.request('test', 'endpoint', headers={'X-HEADER': 'MANIFOLD'})
open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123',
'X-HEADER': 'MANIFOLD'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_passes_arbitrary_parameters(self, open_url_mock):
mock_fixture(open_url_mock, data='hello')
client = ManifoldApiClient('token-123')
client.request('test', 'endpoint', use_proxy=False, timeout=5)
open_url_mock.assert_called_with('https://api.test.manifold.co/v1/endpoint',
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0',
use_proxy=False, timeout=5)
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_raises_on_incorrect_json(self, open_url_mock):
mock_fixture(open_url_mock, data='noJson', headers={'content-type': "application/json"})
client = ManifoldApiClient('token-123')
with self.assertRaises(ApiError) as context:
client.request('test', 'endpoint')
self.assertEqual('JSON response can\'t be parsed while requesting https://api.test.manifold.co/v1/endpoint:\n'
'noJson',
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_raises_on_status_500(self, open_url_mock):
open_url_mock.side_effect = HTTPError('https://api.test.manifold.co/v1/endpoint',
500, 'Server error', {}, six.StringIO('ERROR'))
client = ManifoldApiClient('token-123')
with self.assertRaises(ApiError) as context:
client.request('test', 'endpoint')
self.assertEqual('Server returned: HTTP Error 500: Server error while requesting '
'https://api.test.manifold.co/v1/endpoint:\nERROR',
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_raises_on_bad_url(self, open_url_mock):
open_url_mock.side_effect = URLError('URL is invalid')
client = ManifoldApiClient('token-123')
with self.assertRaises(ApiError) as context:
client.request('test', 'endpoint')
self.assertEqual('Failed lookup url for https://api.test.manifold.co/v1/endpoint : <url'
'open error URL is invalid>',
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_raises_on_ssl_error(self, open_url_mock):
open_url_mock.side_effect = SSLValidationError('SSL Error')
client = ManifoldApiClient('token-123')
with self.assertRaises(ApiError) as context:
client.request('test', 'endpoint')
self.assertEqual('Error validating the server\'s certificate for https://api.test.manifold.co/v1/endpoint: '
'SSL Error',
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_request_raises_on_connection_error(self, open_url_mock):
open_url_mock.side_effect = ConnectionError('Unknown connection error')
client = ManifoldApiClient('token-123')
with self.assertRaises(ApiError) as context:
client.request('test', 'endpoint')
self.assertEqual('Error connecting to https://api.test.manifold.co/v1/endpoint: Unknown connection error',
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_resources_get_all(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/resources'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_resources())
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_resources_filter_label(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/resources?label=resource-1'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_resources(label='resource-1'))
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_resources_filter_team_and_project(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_resources(team_id='tid-1', project_id='pid-1'))
args, kwargs = open_url_mock.call_args
url_called = args[0]
# Dict order is not guaranteed, so a URL may have its query string parameters in a randomized order
self.assertIn('team_id=tid-1', url_called)
self.assertIn('project_id=pid-1', url_called)
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_teams_get_all(self, open_url_mock):
url = 'https://api.identity.manifold.co/v1/teams'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_teams())
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_teams_filter_label(self, open_url_mock):
url = 'https://api.identity.manifold.co/v1/teams'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url][1:2], client.get_teams(label='team-2'))
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_projects_get_all(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/projects'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_projects())
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_projects_filter_label(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/projects?label=project-2'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_projects(label='project-2'))
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
@patch('ansible_collections.community.general.plugins.lookup.manifold.open_url')
def test_get_credentials(self, open_url_mock):
url = 'https://api.marketplace.manifold.co/v1/credentials?resource_id=rid-1'
mock_fixture(open_url_mock, fixture=url)
client = ManifoldApiClient('token-123')
self.assertListEqual(API_FIXTURES[url], client.get_credentials(resource_id='rid-1'))
open_url_mock.assert_called_with(url,
headers={'Accept': '*/*', 'Authorization': 'Bearer token-123'},
http_agent='python-manifold-ansible-1.0.0')
class TestLookupModule(unittest.TestCase):
def setUp(self):
self.lookup = lookup_loader.get('community.general.manifold')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_all(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2',
'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_one_resource(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?label=resource-2']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run(['resource-2'], api_token='token-123'))
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None, label='resource-2')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_two_resources(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2',
'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run(['resource-1', 'resource-2'], api_token='token-123'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.display')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_get_resources_with_same_credential_names(self, client_mock, display_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-5',
'RESOURCE_TOKEN_2': 'token-6'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?project_id=pid-2']
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects?label=project-2']
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-2'))
client_mock.assert_called_with('token-123')
display_mock.warning.assert_has_calls([
call("'RESOURCE_TOKEN_1' with label 'resource-1' was replaced by resource data with label 'resource-3'"),
call("'RESOURCE_TOKEN_2' with label 'resource-1' was replaced by resource data with label 'resource-3'")],
any_order=True
)
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-2')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_team(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?team_id=tid-1']
client_mock.return_value.get_teams.return_value = API_FIXTURES['https://api.identity.manifold.co/v1/teams'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', team='team-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id='tid-1', project_id=None)
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_project(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_3': 'token-3',
'RESOURCE_TOKEN_4': 'token-4'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?project_id=pid-1']
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-1')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_filter_by_team_and_project(self, client_mock):
expected_result = [{'RESOURCE_TOKEN_1': 'token-1',
'RESOURCE_TOKEN_2': 'token-2'
}]
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources?team_id=tid-1&project_id=pid-1']
client_mock.return_value.get_teams.return_value = API_FIXTURES['https://api.identity.manifold.co/v1/teams'][0:1]
client_mock.return_value.get_projects.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/projects'][0:1]
client_mock.return_value.get_credentials.side_effect = lambda x: API_FIXTURES['https://api.marketplace.manifold.co/v1/'
'credentials?resource_id={0}'.format(x)]
self.assertListEqual(expected_result, self.lookup.run([], api_token='token-123', project='project-1'))
client_mock.assert_called_with('token-123')
client_mock.return_value.get_resources.assert_called_with(team_id=None, project_id='pid-1')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_team_doesnt_exist(self, client_mock):
client_mock.return_value.get_teams.return_value = []
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123', team='no-team')
self.assertEqual("Team 'no-team' does not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_project_doesnt_exist(self, client_mock):
client_mock.return_value.get_projects.return_value = []
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123', project='no-project')
self.assertEqual("Project 'no-project' does not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_raise_resource_doesnt_exist(self, client_mock):
client_mock.return_value.get_resources.return_value = API_FIXTURES['https://api.marketplace.manifold.co/v1/resources']
with self.assertRaises(AnsibleError) as context:
self.lookup.run(['resource-1', 'no-resource-1', 'no-resource-2'], api_token='token-123')
self.assertEqual("Resource(s) no-resource-1, no-resource-2 do not exist",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_catch_api_error(self, client_mock):
client_mock.side_effect = ApiError('Generic error')
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123')
self.assertEqual("API Error: Generic error",
str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_catch_unhandled_exception(self, client_mock):
client_mock.side_effect = Exception('Unknown error')
with self.assertRaises(AnsibleError) as context:
self.lookup.run([], api_token='token-123')
self.assertTrue('Exception: Unknown error' in str(context.exception))
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_falls_back_to_env_var(self, client_mock):
client_mock.return_value.get_resources.return_value = []
client_mock.return_value.get_credentials.return_value = []
try:
os.environ['MANIFOLD_API_TOKEN'] = 'token-321'
self.lookup.run([])
finally:
os.environ.pop('MANIFOLD_API_TOKEN', None)
client_mock.assert_called_with('token-321')
@patch('ansible_collections.community.general.plugins.lookup.manifold.ManifoldApiClient')
def test_falls_raises_on_no_token(self, client_mock):
client_mock.return_value.get_resources.return_value = []
client_mock.return_value.get_credentials.return_value = []
os.environ.pop('MANIFOLD_API_TOKEN', None)
with self.assertRaises(AnsibleError) as context:
self.lookup.run([])
assert 'api_token' in str(context.exception)

View file

@ -10,123 +10,13 @@ __metaclass__ = type
import pytest
from ansible_collections.community.general.plugins.module_utils.module_helper import (
DependencyCtxMgr, VarMeta, VarDict, cause_changes
cause_changes
)
# remove in 11.0.0
def test_dependency_ctxmgr():
ctx = DependencyCtxMgr("POTATOES", "Potatoes must be installed")
with ctx:
import potatoes_that_will_never_be_there # noqa: F401, pylint: disable=unused-import
print("POTATOES: ctx.text={0}".format(ctx.text))
assert ctx.text == "Potatoes must be installed"
assert not ctx.has_it
ctx = DependencyCtxMgr("POTATOES2")
with ctx:
import potatoes_that_will_never_be_there_again # noqa: F401, pylint: disable=unused-import
assert not ctx.has_it
print("POTATOES2: ctx.text={0}".format(ctx.text))
assert ctx.text.startswith("No module named")
assert "potatoes_that_will_never_be_there_again" in ctx.text
ctx = DependencyCtxMgr("TYPING")
with ctx:
import sys # noqa: F401, pylint: disable=unused-import
assert ctx.has_it
# remove in 11.0.0
def test_variable_meta():
meta = VarMeta()
assert meta.output is True
assert meta.diff is False
assert meta.value is None
meta.set_value("abc")
assert meta.initial_value == "abc"
assert meta.value == "abc"
assert meta.diff_result is None
meta.set_value("def")
assert meta.initial_value == "abc"
assert meta.value == "def"
assert meta.diff_result is None
# remove in 11.0.0
def test_variable_meta_diff():
meta = VarMeta(diff=True)
assert meta.output is True
assert meta.diff is True
assert meta.value is None
meta.set_value("abc")
assert meta.initial_value == "abc"
assert meta.value == "abc"
assert meta.diff_result is None
meta.set_value("def")
assert meta.initial_value == "abc"
assert meta.value == "def"
assert meta.diff_result == {"before": "abc", "after": "def"}
meta.set_value("ghi")
assert meta.initial_value == "abc"
assert meta.value == "ghi"
assert meta.diff_result == {"before": "abc", "after": "ghi"}
# remove in 11.0.0
def test_vardict():
vd = VarDict()
vd.set('a', 123)
assert vd['a'] == 123
assert vd.a == 123
assert 'a' in vd._meta
assert vd.meta('a').output is True
assert vd.meta('a').diff is False
assert vd.meta('a').change is False
vd['b'] = 456
assert vd.meta('b').output is True
assert vd.meta('b').diff is False
assert vd.meta('b').change is False
vd.set_meta('a', diff=True, change=True)
vd.set_meta('b', diff=True, output=False)
vd['c'] = 789
assert vd.has_changed('c') is False
vd['a'] = 'new_a'
assert vd.has_changed('a') is True
vd['c'] = 'new_c'
assert vd.has_changed('c') is False
vd['b'] = 'new_b'
assert vd.has_changed('b') is False
assert vd.a == 'new_a'
assert vd.c == 'new_c'
assert vd.output() == {'a': 'new_a', 'c': 'new_c'}
assert vd.diff() == {'before': {'a': 123}, 'after': {'a': 'new_a'}}, "diff={0}".format(vd.diff())
# remove in 11.0.0
def test_variable_meta_change():
vd = VarDict()
vd.set('a', 123, change=True)
vd.set('b', [4, 5, 6], change=True)
vd.set('c', {'m': 7, 'n': 8, 'o': 9}, change=True)
vd.set('d', {'a1': {'a11': 33, 'a12': 34}}, change=True)
vd.a = 1234
assert vd.has_changed('a') is True
vd.b.append(7)
assert vd.b == [4, 5, 6, 7]
assert vd.has_changed('b')
vd.c.update({'p': 10})
assert vd.c == {'m': 7, 'n': 8, 'o': 9, 'p': 10}
assert vd.has_changed('c')
vd.d['a1'].update({'a13': 35})
assert vd.d == {'a1': {'a11': 33, 'a12': 34, 'a13': 35}}
assert vd.has_changed('d')
#
# DEPRECATION NOTICE
# Parameters on_success and on_failure are deprecated and will be removed in community.genral 12.0.0
# Parameters on_success and on_failure are deprecated and will be removed in community.general 12.0.0
# Remove testcases with those params when releasing 12.0.0
#
CAUSE_CHG_DECO_PARAMS = ['deco_args', 'expect_exception', 'expect_changed']