pacemaker_cluster: enhancements and add unit tests (#10227)

* feat(initial): Add unit tests and rewrite pacemaker_cluster

This commit introduces unit tests and rewrites the pacemaker_cluster
module to use the pacemaker module utils.

* feat(cleanup): Various fixes and add resource state

This commit migrates the pacemaker_cluster's cleanup state to the
pacemaker_resource module. Additionally, the unit tests for
pacemaker_cluster have been corrected to use the proper mock run-command order.

* doc(botmeta): Add author to pacemaker_cluster

* style(whitespace): Cleanup test files

* refactor(cleanup): Remove unused state value

* bug(fix): Parse apply_all as separate option

* refactor(review): Apply code review suggestions

This commit reworks the breaking changes in the pacemaker_cluster module
into deprecations instead. The following are scheduled for deprecation:
`state: cleanup` and `state: None`.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* refactor(review): Additional review suggestions

* refactor(deprecations): Remove all deprecation changes

* refactor(review): Improve rename changelog entry and fix empty-string logic

* refactor(cleanup): Remove from pacemaker_resource

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* refactor(review): Add changelog and revert required name

* revert(default): Use default state=present

* Update changelogs/fragments/10227-pacemaker-cluster-and-resource-enhancement.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelog fragment.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
commit 283d947f17 by Dexter, 2025-07-14 01:48:36 -04:00, committed by GitHub
8 changed files with 695 additions and 156 deletions

.github/BOTMETA.yml

@@ -1057,7 +1057,7 @@ files:
   $modules/ovh_monthly_billing.py:
     maintainers: fraff
   $modules/pacemaker_cluster.py:
-    maintainers: matbu
+    maintainers: matbu munchtoast
   $modules/pacemaker_resource.py:
     maintainers: munchtoast
   $modules/packet_:

changelogs/fragments/10227-pacemaker-cluster-and-resource-enhancement.yml

@@ -0,0 +1,7 @@
deprecated_features:
- pacemaker_cluster - the parameter ``state`` will become a required parameter in community.general 12.0.0 (https://github.com/ansible-collections/community.general/pull/10227).
minor_changes:
- pacemaker_cluster - add ``state=maintenance`` for managing pacemaker maintenance mode (https://github.com/ansible-collections/community.general/issues/10200, https://github.com/ansible-collections/community.general/pull/10227).
- pacemaker_cluster - rename ``node`` to ``name`` and add ``node`` alias (https://github.com/ansible-collections/community.general/pull/10227).
- pacemaker_resource - enhance module by removing duplicative code (https://github.com/ansible-collections/community.general/pull/10227).
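
In playbook terms, the renamed option and the new state from this fragment look as follows (a minimal sketch; the play context and the node name are assumed, not part of this PR):

    - name: Put the cluster into maintenance mode
      community.general.pacemaker_cluster:
        state: maintenance

    - name: Bring one node online, using the renamed option (node still works as an alias)
      community.general.pacemaker_cluster:
        state: online
        name: pc1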

plugins/module_utils/pacemaker.py

@@ -14,7 +14,12 @@ _state_map = {
"absent": "remove",
"status": "status",
"enabled": "enable",
"disabled": "disable"
"disabled": "disable",
"online": "start",
"offline": "stop",
"maintenance": "set",
"config": "config",
"cleanup": "cleanup",
}
@@ -38,20 +43,19 @@ def fmt_resource_argument(value):
 def get_pacemaker_maintenance_mode(runner):
-    with runner("config") as ctx:
-        rc, out, err = ctx.run()
+    with runner("cli_action config") as ctx:
+        rc, out, err = ctx.run(cli_action="property")
     maintenance_mode_output = list(filter(lambda string: "maintenance-mode=true" in string.lower(), out.splitlines()))
     return bool(maintenance_mode_output)

-def pacemaker_runner(module, cli_action=None, **kwargs):
+def pacemaker_runner(module, **kwargs):
     runner_command = ['pcs']
-    if cli_action:
-        runner_command.append(cli_action)
     runner = CmdRunner(
         module,
         command=runner_command,
         arg_formats=dict(
+            cli_action=cmd_runner_fmt.as_list(),
             state=cmd_runner_fmt.as_map(_state_map),
             name=cmd_runner_fmt.as_list(),
             resource_type=cmd_runner_fmt.as_func(fmt_resource_type),
@@ -59,6 +63,7 @@ def pacemaker_runner(module, cli_action=None, **kwargs):
             resource_operation=cmd_runner_fmt.as_func(fmt_resource_operation),
             resource_meta=cmd_runner_fmt.stack(cmd_runner_fmt.as_opt_val)("meta"),
             resource_argument=cmd_runner_fmt.as_func(fmt_resource_argument),
+            apply_all=cmd_runner_fmt.as_bool("--all"),
             wait=cmd_runner_fmt.as_opt_eq_val("--wait"),
             config=cmd_runner_fmt.as_fixed("config"),
             force=cmd_runner_fmt.as_bool("--force"),
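
Taken together, these formats let one generic runner compose every pcs invocation in this PR: cli_action contributes the subcommand, state is translated through _state_map, and the remaining formats render flags. For example, in the mock notation the unit tests below use (the /testbin/pcs path comes from the test harness), state=online with no name set is expected to compose:

    command: [/testbin/pcs, cluster, start, --all, --wait=300]

where cluster is the cli_action, start comes from the online entry of _state_map, --all from apply_all, and --wait=300 from the module's timeout parameter.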

plugins/modules/pacemaker_cluster.py

@@ -13,6 +13,7 @@ module: pacemaker_cluster
 short_description: Manage pacemaker clusters
 author:
   - Mathieu Bultel (@matbu)
+  - Dexter Le (@munchtoast)
 description:
   - This module can manage a pacemaker cluster and nodes from Ansible using the pacemaker CLI.
 extends_documentation_fragment:
@@ -26,18 +27,20 @@ options:
   state:
     description:
       - Indicate desired state of the cluster.
-    choices: [cleanup, offline, online, restart]
+      - The value V(maintenance) has been added in community.general 11.1.0.
+    choices: [cleanup, offline, online, restart, maintenance]
     type: str
-  node:
+  name:
     description:
       - Specify which node of the cluster you want to manage. V(null) == the cluster status itself, V(all) == check the status
         of all nodes.
     type: str
+    aliases: ['node']
   timeout:
     description:
-      - Timeout when the module should considered that the action has failed.
-    default: 300
+      - Timeout period (in seconds) for polling the cluster operation.
     type: int
+    default: 300
   force:
     description:
       - Force the change of the cluster state.
@@ -63,132 +66,104 @@ out:
     returned: always
 """

-import time
-from ansible.module_utils.basic import AnsibleModule
+from ansible_collections.community.general.plugins.module_utils.module_helper import StateModuleHelper
+from ansible_collections.community.general.plugins.module_utils.pacemaker import pacemaker_runner, get_pacemaker_maintenance_mode

 _PCS_CLUSTER_DOWN = "Error: cluster is not currently running on this node"

+class PacemakerCluster(StateModuleHelper):
+    module = dict(
+        argument_spec=dict(
+            state=dict(type='str', choices=[
+                'cleanup', 'offline', 'online', 'restart', 'maintenance']),
+            name=dict(type='str', aliases=['node']),
+            timeout=dict(type='int', default=300),
+            force=dict(type='bool', default=True)
+        ),
+        supports_check_mode=True,
+    )
+    default_state = ""
+
+    def __init_module__(self):
+        self.runner = pacemaker_runner(self.module)
+        self.vars.set('apply_all', True if not self.module.params['name'] else False)
+        get_args = dict([('cli_action', 'cluster'), ('state', 'status'), ('name', None), ('apply_all', self.vars.apply_all)])
+        if self.module.params['state'] == "maintenance":
+            get_args['cli_action'] = "property"
+            get_args['state'] = "config"
+            get_args['name'] = "maintenance-mode"
+        elif self.module.params['state'] == "cleanup":
+            get_args['cli_action'] = "resource"
+            get_args['name'] = self.module.params['name']
-def get_cluster_status(module):
-    cmd = ["pcs", "cluster", "status"]
-    rc, out, err = module.run_command(cmd)
-    if out in _PCS_CLUSTER_DOWN:
-        return 'offline'
-    else:
-        return 'online'
+        self.vars.set('get_args', get_args)
+        self.vars.set('previous_value', self._get()['out'])
+        self.vars.set('value', self.vars.previous_value, change=True, diff=True)
+        if not self.module.params['state']:
+            self.module.deprecate(
+                'Parameter "state" values not set is being deprecated. Make sure to provide a value for "state"',
+                version='12.0.0',
+                collection_name='community.general'
+            )
-def get_node_status(module, node='all'):
-    node_l = ["all"] if node == "all" else []
-    cmd = ["pcs", "cluster", "pcsd-status"] + node_l
-    rc, out, err = module.run_command(cmd)
-    if rc == 1:
-        module.fail_json(msg="Command execution failed.\nCommand: `%s`\nError: %s" % (cmd, err))
-    status = []
-    for o in out.splitlines():
-        status.append(o.split(':'))
-    return status
+    def __quit_module__(self):
+        self.vars.set('value', self._get()['out'])
+
+    def _process_command_output(self, fail_on_err, ignore_err_msg=""):
+        def process(rc, out, err):
+            if fail_on_err and rc != 0 and err and ignore_err_msg not in err:
+                self.do_raise('pcs failed with error (rc={0}): {1}'.format(rc, err))
+            out = out.rstrip()
+            return None if out == "" else out
+        return process
-def clean_cluster(module, timeout):
-    cmd = ["pcs", "resource", "cleanup"]
-    rc, out, err = module.run_command(cmd)
-    if rc == 1:
-        module.fail_json(msg="Command execution failed.\nCommand: `%s`\nError: %s" % (cmd, err))
+    def _get(self):
+        with self.runner('cli_action state name') as ctx:
+            result = ctx.run(cli_action=self.vars.get_args['cli_action'], state=self.vars.get_args['state'], name=self.vars.get_args['name'])
+            return dict([('rc', result[0]),
+                         ('out', result[1] if result[1] != "" else None),
+                         ('err', result[2])])
+
+    def state_cleanup(self):
+        with self.runner('cli_action state name', output_process=self._process_command_output(True, "Fail"), check_mode_skip=True) as ctx:
+            ctx.run(cli_action='resource')
-def set_cluster(module, state, timeout, force):
-    if state == 'online':
-        cmd = ["pcs", "cluster", "start"]
-    if state == 'offline':
-        cmd = ["pcs", "cluster", "stop"]
-        if force:
-            cmd = cmd + ["--force"]
-    rc, out, err = module.run_command(cmd)
-    if rc == 1:
-        module.fail_json(msg="Command execution failed.\nCommand: `%s`\nError: %s" % (cmd, err))
+    def state_offline(self):
+        with self.runner('cli_action state name apply_all wait',
+                         output_process=self._process_command_output(True, "not currently running"),
+                         check_mode_skip=True) as ctx:
+            ctx.run(cli_action='cluster', apply_all=self.vars.apply_all, wait=self.module.params['timeout'])
-    t = time.time()
-    ready = False
-    while time.time() < t + timeout:
-        cluster_state = get_cluster_status(module)
-        if cluster_state == state:
-            ready = True
-            break
-    if not ready:
-        module.fail_json(msg="Failed to set the state `%s` on the cluster\n" % (state))
+    def state_online(self):
+        with self.runner('cli_action state name apply_all wait',
+                         output_process=self._process_command_output(True, "currently running"),
+                         check_mode_skip=True) as ctx:
+            ctx.run(cli_action='cluster', apply_all=self.vars.apply_all, wait=self.module.params['timeout'])
+        if get_pacemaker_maintenance_mode(self.runner):
+            with self.runner('cli_action state name', output_process=self._process_command_output(True, "Fail"), check_mode_skip=True) as ctx:
+                ctx.run(cli_action='property', state='maintenance', name='maintenance-mode=false')
+
+    def state_maintenance(self):
+        with self.runner('cli_action state name',
+                         output_process=self._process_command_output(True, "Fail"),
+                         check_mode_skip=True) as ctx:
+            ctx.run(cli_action='property', name='maintenance-mode=true')
+
+    def state_restart(self):
+        with self.runner('cli_action state name apply_all wait',
+                         output_process=self._process_command_output(True, "not currently running"),
+                         check_mode_skip=True) as ctx:
+            ctx.run(cli_action='cluster', state='offline', apply_all=self.vars.apply_all, wait=self.module.params['timeout'])
+            ctx.run(cli_action='cluster', state='online', apply_all=self.vars.apply_all, wait=self.module.params['timeout'])
+        if get_pacemaker_maintenance_mode(self.runner):
+            with self.runner('cli_action state name', output_process=self._process_command_output(True, "Fail"), check_mode_skip=True) as ctx:
+                ctx.run(cli_action='property', state='maintenance', name='maintenance-mode=false')

 def main():
-    argument_spec = dict(
-        state=dict(type='str', choices=['online', 'offline', 'restart', 'cleanup']),
-        node=dict(type='str'),
-        timeout=dict(type='int', default=300),
-        force=dict(type='bool', default=True),
-    )
-
-    module = AnsibleModule(
-        argument_spec,
-        supports_check_mode=True,
-    )
-
-    changed = False
-    state = module.params['state']
-    node = module.params['node']
-    force = module.params['force']
-    timeout = module.params['timeout']
-
-    if state in ['online', 'offline']:
-        # Get cluster status
-        if node is None:
-            cluster_state = get_cluster_status(module)
-            if cluster_state == state:
-                module.exit_json(changed=changed, out=cluster_state)
-            else:
-                if module.check_mode:
-                    module.exit_json(changed=True)
-                set_cluster(module, state, timeout, force)
-                cluster_state = get_cluster_status(module)
-                if cluster_state == state:
-                    module.exit_json(changed=True, out=cluster_state)
-                else:
-                    module.fail_json(msg="Fail to bring the cluster %s" % state)
-        else:
-            cluster_state = get_node_status(module, node)
-            # Check cluster state
-            for node_state in cluster_state:
-                if node_state[1].strip().lower() == state:
-                    module.exit_json(changed=changed, out=cluster_state)
-                else:
-                    if module.check_mode:
-                        module.exit_json(changed=True)
-                    # Set cluster status if needed
-                    set_cluster(module, state, timeout, force)
-                    cluster_state = get_node_status(module, node)
-                    module.exit_json(changed=True, out=cluster_state)
-    elif state == 'restart':
-        if module.check_mode:
-            module.exit_json(changed=True)
-        set_cluster(module, 'offline', timeout, force)
-        cluster_state = get_cluster_status(module)
-        if cluster_state == 'offline':
-            set_cluster(module, 'online', timeout, force)
-            cluster_state = get_cluster_status(module)
-            if cluster_state == 'online':
-                module.exit_json(changed=True, out=cluster_state)
-            else:
-                module.fail_json(msg="Failed during the restart of the cluster, the cluster cannot be started")
-        else:
-            module.fail_json(msg="Failed during the restart of the cluster, the cluster cannot be stopped")
-    elif state == 'cleanup':
-        if module.check_mode:
-            module.exit_json(changed=True)
-        clean_cluster(module, timeout)
-        cluster_state = get_cluster_status(module)
-        module.exit_json(changed=True, out=cluster_state)
+    PacemakerCluster.execute()

 if __name__ == '__main__':
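
Note that the rewrite drops the old time-based polling loop: the timeout parameter is now handed to pcs as --wait, as the state_* methods above show. A task sketch (cluster inventory assumed) that widens that window for a full restart:

    - name: Restart the cluster on all nodes, waiting up to 10 minutes
      community.general.pacemaker_cluster:
        state: restart
        timeout: 600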

plugins/modules/pacemaker_resource.py

@@ -163,13 +163,15 @@ class PacemakerResource(StateModuleHelper):
         required_if=[('state', 'present', ['resource_type', 'resource_option'])],
         supports_check_mode=True,
     )
     default_state = "present"

     def __init_module__(self):
-        self.runner = pacemaker_runner(self.module, cli_action='resource')
-        self._maintenance_mode_runner = pacemaker_runner(self.module, cli_action='property')
-        self.vars.set('previous_value', self._get())
+        self.runner = pacemaker_runner(self.module)
+        self.vars.set('previous_value', self._get()['out'])
         self.vars.set('value', self.vars.previous_value, change=True, diff=True)
         self.module.params['name'] = self.module.params['name'] or None

+    def __quit_module__(self):
+        self.vars.set('value', self._get()['out'])
+
     def _process_command_output(self, fail_on_err, ignore_err_msg=""):
         def process(rc, out, err):
@@ -180,45 +182,31 @@ class PacemakerResource(StateModuleHelper):
         return process

     def _get(self):
-        with self.runner('state name', output_process=self._process_command_output(False)) as ctx:
-            return ctx.run(state='status')
+        with self.runner('cli_action state name') as ctx:
+            result = ctx.run(cli_action="resource", state='status')
+            return dict([('rc', result[0]),
+                         ('out', result[1] if result[1] != "" else None),
+                         ('err', result[2])])

     def state_absent(self):
-        runner_args = ['state', 'name', 'force']
-        force = get_pacemaker_maintenance_mode(self._maintenance_mode_runner)
-        with self.runner(runner_args, output_process=self._process_command_output(True, "does not exist"), check_mode_skip=True) as ctx:
-            ctx.run(force=force)
-        self.vars.set('value', self._get())
-        self.vars.stdout = ctx.results_out
-        self.vars.stderr = ctx.results_err
-        self.vars.cmd = ctx.cmd
+        force = get_pacemaker_maintenance_mode(self.runner)
+        with self.runner('cli_action state name force', output_process=self._process_command_output(True, "does not exist"), check_mode_skip=True) as ctx:
+            ctx.run(cli_action='resource', force=force)

     def state_present(self):
         with self.runner(
-            'state name resource_type resource_option resource_operation resource_meta resource_argument wait',
-            output_process=self._process_command_output(not get_pacemaker_maintenance_mode(self._maintenance_mode_runner), "already exists"),
+            'cli_action state name resource_type resource_option resource_operation resource_meta resource_argument wait',
+            output_process=self._process_command_output(not get_pacemaker_maintenance_mode(self.runner), "already exists"),
             check_mode_skip=True) as ctx:
-            ctx.run()
-        self.vars.set('value', self._get())
-        self.vars.stdout = ctx.results_out
-        self.vars.stderr = ctx.results_err
-        self.vars.cmd = ctx.cmd
+            ctx.run(cli_action='resource')

     def state_enabled(self):
-        with self.runner('state name', output_process=self._process_command_output(True, "Starting"), check_mode_skip=True) as ctx:
-            ctx.run()
-        self.vars.set('value', self._get())
-        self.vars.stdout = ctx.results_out
-        self.vars.stderr = ctx.results_err
-        self.vars.cmd = ctx.cmd
+        with self.runner('cli_action state name', output_process=self._process_command_output(True, "Starting"), check_mode_skip=True) as ctx:
+            ctx.run(cli_action='resource')

     def state_disabled(self):
-        with self.runner('state name', output_process=self._process_command_output(True, "Stopped"), check_mode_skip=True) as ctx:
-            ctx.run()
-        self.vars.set('value', self._get())
-        self.vars.stdout = ctx.results_out
-        self.vars.stderr = ctx.results_err
-        self.vars.cmd = ctx.cmd
+        with self.runner('cli_action state name', output_process=self._process_command_output(True, "Stopped"), check_mode_skip=True) as ctx:
+            ctx.run(cli_action='resource')

 def main():
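
For comparison with the unit test near the end of this commit, the same kind of create request in playbook form might look like this (a sketch; the IP address and group name are illustrative only):

    - name: Ensure a virtual IP resource exists inside a group
      community.general.pacemaker_resource:
        state: present
        name: virtual-ip
        resource_type:
          resource_name: IPaddr2
        resource_option:
          - "ip=192.168.2.1"
        resource_argument:
          argument_action: group
          argument_option:
            - test_group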

tests/unit/plugins/modules/test_pacemaker_cluster.py

@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# Author: Dexter Le (dextersydney2001@gmail.com)
# Largely adapted from test_redhat_subscription by
# Jiri Hnidek (jhnidek@redhat.com)
#
# Copyright (c) Dexter Le (dextersydney2001@gmail.com)
# Copyright (c) Jiri Hnidek (jhnidek@redhat.com)
#
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible_collections.community.general.plugins.modules import pacemaker_cluster
from .uthelper import UTHelper, RunCommandMock
UTHelper.from_module(pacemaker_cluster, __name__, mocks=[RunCommandMock])
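
UTHelper generates the actual pytest cases from a YAML specification (by the convention used in this collection's unit tests, a sibling file with the same basename, here test_pacemaker_cluster.yaml, shown next), asserting the module result and the exact sequence of mocked run_command calls. Each entry follows roughly this shape (a skeleton, not a real case):

    test_cases:
      - id: example_case
        input:
          state: online
        output:
          changed: true
        mocks:
          run_command:
            - command: [/testbin/pcs, cluster, status]
              environ: {environ_update: {LANGUAGE: C, LC_ALL: C}, check_rc: false}
              rc: 0
              out: " * Online: [ pc1 ]"
              err: ""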

tests/unit/plugins/modules/test_pacemaker_cluster.yaml

@@ -0,0 +1,488 @@
# -*- coding: utf-8 -*-
# Copyright (c) Dexter Le (dextersydney2001@gmail.com)
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
---
anchors:
environ: &env-def {environ_update: {LANGUAGE: C, LC_ALL: C}, check_rc: false}
test_cases:
- id: test_online_minimal_input_initial_online_all_no_maintenance
input:
state: online
output:
changed: false
previous_value: ' * Online: [ pc1, pc2, pc3 ]'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- command: [/testbin/pcs, cluster, start, --all, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 1
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_online_minimal_input_initial_offline_all_maintenance
input:
state: online
output:
changed: true
previous_value: 'Error: cluster is not currently running on this node'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- command: [/testbin/pcs, cluster, start, --all, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 0
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
maintenance-mode=true
err: ""
- command: [/testbin/pcs, property, set, maintenance-mode=false]
environ: *env-def
rc: 0
out: ""
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_online_minimal_input_initial_offline_single_nonlocal_no_maintenance
input:
state: online
name: pc2
output:
changed: true
previous_value: '* Node pc2: UNCLEAN (offline)\n * Online: [ pc1, pc3 ]'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: '* Node pc2: UNCLEAN (offline)\n * Online: [ pc1, pc3 ]'
err: ""
- command: [/testbin/pcs, cluster, start, pc2, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 1
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_online_minimal_input_initial_offline_single_local_no_maintenance
input:
state: online
name: pc1
output:
changed: true
previous_value: 'Error: cluster is not currently running on this node'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- command: [/testbin/pcs, cluster, start, pc1, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 1
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_offline_minimal_input_initial_online_all
input:
state: offline
output:
changed: true
previous_value: ' * Online: [ pc1, pc2, pc3 ]'
value: 'Error: cluster is not currently running on this node'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- command: [/testbin/pcs, cluster, stop, --all, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- id: test_offline_minimal_input_initial_offline_all
input:
state: offline
output:
changed: false
previous_value: 'Error: cluster is not currently running on this node'
value: 'Error: cluster is not currently running on this node'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- command: [/testbin/pcs, cluster, stop, --all, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- id: test_offline_minimal_input_initial_offline_single_nonlocal
input:
state: offline
name: pc3
output:
changed: true
previous_value: '* Node pc2: UNCLEAN (offline)\n * Online: [ pc1, pc3 ]'
value: '* Node pc2: UNCLEAN (offline)\n* Node pc3: UNCLEAN (offline)\n * Online: [ pc1 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: '* Node pc2: UNCLEAN (offline)\n * Online: [ pc1, pc3 ]'
err: ""
- command: [/testbin/pcs, cluster, stop, pc3, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: '* Node pc2: UNCLEAN (offline)\n* Node pc3: UNCLEAN (offline)\n * Online: [ pc1 ]'
err: ""
- id: test_restart_minimal_input_initial_online_all_no_maintenance
input:
state: restart
output:
changed: false
previous_value: ' * Online: [ pc1, pc2, pc3 ]'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- command: [/testbin/pcs, cluster, stop, --all, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, start, --all, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 1
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_restart_minimal_input_initial_offline_all_no_maintenance
input:
state: restart
output:
changed: true
previous_value: 'Error: cluster is not currently running on this node'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- command: [/testbin/pcs, cluster, stop, --all, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, start, --all, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 1
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_restart_minimal_input_initial_offline_all_maintenance
input:
state: restart
output:
changed: true
previous_value: 'Error: cluster is not currently running on this node'
value: ' * Online: [ pc1, pc2, pc3 ]'
mocks:
run_command:
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 1
out: 'Error: cluster is not currently running on this node'
err: ""
- command: [/testbin/pcs, cluster, stop, --all, --wait=300]
environ: *env-def
rc: 0
out: "Stopping Cluster..."
err: ""
- command: [/testbin/pcs, cluster, start, --all, --wait=300]
environ: *env-def
rc: 0
out: "Starting Cluster..."
err: ""
- command: [/testbin/pcs, property, config]
environ: *env-def
rc: 0
out: |
Cluster Properties: cib-bootstrap-options
cluster-infrastructure=corosync
cluster-name=hacluster
dc-version=2.1.9-1.fc41-7188dbf
have-watchdog=false
maintenance-mode=true
err: ""
- command: [/testbin/pcs, property, set, maintenance-mode=false]
environ: *env-def
rc: 0
out: ""
err: ""
- command: [/testbin/pcs, cluster, status]
environ: *env-def
rc: 0
out: ' * Online: [ pc1, pc2, pc3 ]'
err: ""
- id: test_maintenance_minimal_input_initial_online
input:
state: maintenance
output:
changed: true
previous_value: 'maintenance-mode=false (default)'
value: 'maintenance-mode=true'
mocks:
run_command:
- command: [/testbin/pcs, property, config, maintenance-mode]
environ: *env-def
rc: 0
out: 'maintenance-mode=false (default)'
err: ""
- command: [/testbin/pcs, property, set, maintenance-mode=true]
environ: *env-def
rc: 0
out: ""
err: ""
- command: [/testbin/pcs, property, config, maintenance-mode]
environ: *env-def
rc: 0
out: 'maintenance-mode=true'
err: ""
- id: test_maintenance_minimal_input_initial_offline
input:
state: maintenance
output:
failed: true
msg: "pcs failed with error (rc=1): Error: unable to get cib"
mocks:
run_command:
- command: [/testbin/pcs, property, config, maintenance-mode]
environ: *env-def
rc: 1
out: ""
err: "Error: unable to get cib"
- command: [/testbin/pcs, property, set, maintenance-mode=true]
environ: *env-def
rc: 1
out: ""
err: "Error: unable to get cib"
- id: test_maintenance_minimal_input_initial_maintenance
input:
state: maintenance
output:
changed: false
previous_value: 'maintenance-mode=true'
value: 'maintenance-mode=true'
mocks:
run_command:
- command: [/testbin/pcs, property, config, maintenance-mode]
environ: *env-def
rc: 0
out: 'maintenance-mode=true'
err: ""
- command: [/testbin/pcs, property, set, maintenance-mode=true]
environ: *env-def
rc: 0
out: ""
err: ""
- command: [/testbin/pcs, property, config, maintenance-mode]
environ: *env-def
rc: 0
out: 'maintenance-mode=true'
err: ""
- id: test_cleanup_minimal_input_initial_resources_not_exist
input:
state: cleanup
output:
changed: false
previous_value: "NO resources configured"
value: "NO resources configured"
mocks:
run_command:
- command: [/testbin/pcs, resource, status]
environ: *env-def
rc: 0
out: "NO resources configured"
err: ""
- command: [/testbin/pcs, resource, cleanup]
environ: *env-def
rc: 0
out: "Cleaned up all resources on all nodes"
err: ""
- command: [/testbin/pcs, resource, status]
environ: *env-def
rc: 0
out: "NO resources configured"
err: ""
- id: test_cleanup_minimal_input_initial_resources_exists
input:
state: cleanup
output:
changed: true
previous_value: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
value: "NO resources configured"
mocks:
run_command:
- command: [/testbin/pcs, resource, status]
environ: *env-def
rc: 0
out: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
err: ""
- command: [/testbin/pcs, resource, cleanup]
environ: *env-def
rc: 0
out: "Cleaned up all resources on all nodes"
err: ""
- command: [/testbin/pcs, resource, status]
environ: *env-def
rc: 0
out: "NO resources configured"
err: ""
- id: test_cleanup_specific_minimal_input_initial_resources_exists
input:
state: cleanup
name: virtual-ip
output:
changed: true
previous_value: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
value: "NO resources configured"
mocks:
run_command:
- command: [/testbin/pcs, resource, status, virtual-ip]
environ: *env-def
rc: 0
out: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
err: ""
- command: [/testbin/pcs, resource, cleanup, virtual-ip]
environ: *env-def
rc: 0
out: "Cleaned up virtual-ip on X"
err: ""
- command: [/testbin/pcs, resource, status, virtual-ip]
environ: *env-def
rc: 0
out: "NO resources configured"
err: ""

tests/unit/plugins/modules/test_pacemaker_resource.yaml

@@ -51,6 +51,63 @@ test_cases:
           rc: 0
           out: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
           err: ""
+  - id: test_present_filled_input_resource_not_exist
+    input:
+      state: present
+      name: virtual-ip
+      resource_type:
+        resource_name: IPaddr2
+      resource_option:
+        - "ip=[192.168.2.1]"
+      resource_operation:
+        - operation_action: start
+          operation_option:
+            - timeout=1200
+        - operation_action: stop
+          operation_option:
+            - timeout=1200
+        - operation_action: monitor
+          operation_option:
+            - timeout=1200
+      resource_meta:
+        - test_meta1=123
+        - test_meta2=456
+      resource_argument:
+        argument_action: group
+        argument_option:
+          - test_group
+      wait: 200
+    output:
+      changed: true
+      previous_value: null
+      value: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
+    mocks:
+      run_command:
+        - command: [/testbin/pcs, resource, status, virtual-ip]
+          environ: *env-def
+          rc: 1
+          out: ""
+          err: "Error: resource or tag id 'virtual-ip' not found"
+        - command: [/testbin/pcs, property, config]
+          environ: *env-def
+          rc: 1
+          out: |
+            Cluster Properties: cib-bootstrap-options
+              cluster-infrastructure=corosync
+              cluster-name=hacluster
+              dc-version=2.1.9-1.fc41-7188dbf
+              have-watchdog=false
+          err: ""
+        - command: [/testbin/pcs, resource, create, virtual-ip, IPaddr2, "ip=[192.168.2.1]", op, start, timeout=1200, op, stop, timeout=1200, op, monitor, timeout=1200, meta, test_meta1=123, meta, test_meta2=456, --group, test_group, --wait=200]
+          environ: *env-def
+          rc: 0
+          out: "Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPAddr2')"
+          err: ""
+        - command: [/testbin/pcs, resource, status, virtual-ip]
+          environ: *env-def
+          rc: 0
+          out: " * virtual-ip\t(ocf:heartbeat:IPAddr2):\t Started"
+          err: ""
   - id: test_present_minimal_input_resource_exists
     input:
       state: present