Remove proxmox content (#10110)

Remove proxmox content.
Felix Fontein 2025-06-08 16:18:16 +02:00 committed by GitHub
parent f63fdceb23
commit f2b7bdf293
76 changed files with 124 additions and 14629 deletions

.github/BOTMETA.yml

@ -113,9 +113,6 @@ files:
$connections/lxd.py:
labels: lxd
maintainers: mattclay
$connections/proxmox_pct_remote.py:
labels: proxmox
maintainers: mietzen
$connections/qubes.py:
maintainers: kushaldas
$connections/saltstack.py:
@ -247,8 +244,6 @@ files:
keywords: opennebula dynamic inventory script
labels: cloud opennebula
maintainers: feldsam
$inventories/proxmox.py:
maintainers: $team_virt ilijamt krauthosting
$inventories/scaleway.py:
labels: cloud scaleway
maintainers: $team_scaleway
@ -1136,34 +1131,6 @@ files:
maintainers: $team_bsd berenddeboer
$modules/pritunl_:
maintainers: Lowess
$modules/proxmox:
keywords: kvm libvirt proxmox qemu
labels: proxmox virt
maintainers: $team_virt UnderGreen krauthosting
ignore: tleguern
$modules/proxmox.py:
ignore: skvidal
maintainers: UnderGreen krauthosting
$modules/proxmox_disk.py:
maintainers: castorsky krauthosting
$modules/proxmox_kvm.py:
ignore: skvidal
maintainers: helldorado krauthosting
$modules/proxmox_backup.py:
maintainers: IamLunchbox
$modules/proxmox_backup_info.py:
maintainers: raoufnezhad mmayabi
$modules/proxmox_nic.py:
maintainers: Kogelvis krauthosting
$modules/proxmox_node_info.py:
maintainers: jwbernin krauthosting
$modules/proxmox_storage_contents_info.py:
maintainers: l00ptr krauthosting
$modules/proxmox_tasks_info:
maintainers: paginabianca krauthosting
$modules/proxmox_template.py:
ignore: skvidal
maintainers: UnderGreen krauthosting
$modules/pubnub_blocks.py:
maintainers: parfeon pubnub
$modules/pulp_repo.py:


@ -54,12 +54,6 @@ exclusions = [
]
doc_fragment = "community.general.keycloak.actiongroup_keycloak"
[[sessions.extra_checks.action_groups_config]]
name = "proxmox"
pattern = "^proxmox(_.*)?$"
exclusions = []
doc_fragment = "community.general.proxmox.actiongroup_proxmox"
[sessions.build_import_check]
run_galaxy_importer = true


@ -1,2 +0,0 @@
bugfixes:
- "proxmox - fix crash in module when the used on an existing LXC container with ``state=present`` and ``force=true`` (https://github.com/ansible-collections/community.proxmox/pull/91, https://github.com/ansible-collections/community.general/pull/10155)."


@ -1,3 +0,0 @@
---
minor_changes:
- proxmox_snap - correctly handle proxmox_snap timeout parameter (https://github.com/ansible-collections/community.proxmox/issues/73, https://github.com/ansible-collections/community.proxmox/issues/95, https://github.com/ansible-collections/community.general/pull/10176).


@ -1,2 +0,0 @@
minor_changes:
- proxmox_template - add server side artifact fetching support (https://github.com/ansible-collections/community.general/pull/9113).


@ -1,2 +0,0 @@
bugfixes:
- proxmox_backup - fix incorrect key lookup in vmid permission check (https://github.com/ansible-collections/community.general/pull/9223).


@ -1,11 +0,0 @@
minor_changes:
- proxmox - refactors the proxmox module (https://github.com/ansible-collections/community.general/pull/9225).
bugfixes:
- proxmox - fixes idempotency of template conversions (https://github.com/ansible-collections/community.general/pull/9225, https://github.com/ansible-collections/community.general/issues/8811).
- proxmox - fixes issues with disk_volume variable (https://github.com/ansible-collections/community.general/pull/9225, https://github.com/ansible-collections/community.general/issues/9065).
- proxmox - fixes incorrect parsing for bind-only mounts (https://github.com/ansible-collections/community.general/pull/9225, https://github.com/ansible-collections/community.general/issues/8982).
- proxmox module utils - fixes ignoring of ``choose_first_if_multiple`` argument in ``get_vmid`` (https://github.com/ansible-collections/community.general/pull/9225).
deprecated_features:
- proxmox - removes default value ``false`` of ``update`` parameter. This will be changed to a default of ``true`` in community.general 11.0.0 (https://github.com/ansible-collections/community.general/pull/9225).


@ -1,2 +0,0 @@
minor_changes:
- proxmox inventory plugin - strip whitespace from ``user``, ``token_id``, and ``token_secret`` (https://github.com/ansible-collections/community.general/issues/9227, https://github.com/ansible-collections/community.general/pull/9228/).


@ -1,2 +0,0 @@
minor_changes:
- proxmox_backup - refactor permission checking to improve code readability and maintainability (https://github.com/ansible-collections/community.general/pull/9239).


@ -1,4 +0,0 @@
bugfixes:
- proxmox_disk - fix async method and make ``resize_disk`` method handle errors correctly (https://github.com/ansible-collections/community.general/pull/9256).
minor_changes:
- proxmox module utils - add method ``api_task_complete`` that can wait for task completion and return error message (https://github.com/ansible-collections/community.general/pull/9256).


@ -1,2 +0,0 @@
bugfixes:
- proxmox_template - fix the wrong path called on ``proxmox_template.task_status`` (https://github.com/ansible-collections/community.general/issues/9276, https://github.com/ansible-collections/community.general/pull/9277).


@ -7,7 +7,6 @@ minor_changes:
- nmap inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- online inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- opennebula inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- proxmox inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- scaleway inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- stackpath_compute inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).
- virtualbox inventory plugin - use f-strings instead of interpolations or ``format`` (https://github.com/ansible-collections/community.general/pull/9323).


@ -13,7 +13,6 @@ minor_changes:
- "lxd inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "lxd inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "nmap inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "nmap inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "opennebula inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "opennebula inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "proxmox inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "scaleway inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "scaleway inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "virtualbox inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "virtualbox inventory plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."
- "cyberarkpassword lookup plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)." - "cyberarkpassword lookup plugin - clean up string conversions (https://github.com/ansible-collections/community.general/pull/9379)."


@ -6,7 +6,6 @@ minor_changes:
- jail connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- lxc connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- lxd connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- proxmox_pct_remote connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- qubes connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- saltstack connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- zone connection plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
@ -19,7 +18,6 @@ minor_changes:
- nmap inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- online inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- opennebula inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- proxmox inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- scaleway inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- stackpath_compute inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).
- virtualbox inventory plugin - adjust standard preamble for Python 3 (https://github.com/ansible-collections/community.general/pull/9584).


@ -1,2 +0,0 @@
minor_changes:
- proxmox_template - add support for checksum validation with new options ``checksum_algorithm`` and ``checksum`` (https://github.com/ansible-collections/community.general/issues/9553, https://github.com/ansible-collections/community.general/pull/9601).


@ -1,3 +0,0 @@
bugfixes:
- proxmox - fixes a typo in the translation of the ``pubkey`` parameter to proxmox' ``ssh-public-keys`` (https://github.com/ansible-collections/community.general/issues/9642, https://github.com/ansible-collections/community.general/pull/9645).
- proxmox - adds the ``pubkey`` parameter (back to) the ``update`` state (https://github.com/ansible-collections/community.general/issues/9642, https://github.com/ansible-collections/community.general/pull/9645).


@ -1,2 +0,0 @@
minor_changes:
- proxmox_kvm - allow hibernation and suspending of VMs (https://github.com/ansible-collections/community.general/issues/9620, https://github.com/ansible-collections/community.general/pull/9653).


@ -1,2 +0,0 @@
bugfixes:
- "proxmox inventory plugin - plugin did not update cache correctly after ``meta: refresh_inventory`` (https://github.com/ansible-collections/community.general/issues/9710, https://github.com/ansible-collections/community.general/pull/9760)."


@ -1,2 +0,0 @@
bugfixes:
- proxmox - add missing key selection of ``'status'`` key to ``get_lxc_status`` (https://github.com/ansible-collections/community.general/issues/9696, https://github.com/ansible-collections/community.general/pull/9809).


@ -1,2 +0,0 @@
minor_changes:
- proxmox_kvm - add missing audio hardware device handling (https://github.com/ansible-collections/community.general/issues/5192, https://github.com/ansible-collections/community.general/pull/9847).


@ -1,2 +0,0 @@
bugfixes:
- proxmox_vm_info - the module no longer expects that the key ``template`` exists in a dictionary returned by Proxmox (https://github.com/ansible-collections/community.general/issues/9875, https://github.com/ansible-collections/community.general/pull/9910).


@ -1,2 +0,0 @@
minor_changes:
- proxmox and proxmox_kvm modules - allow uppercase characters in VM/container tags (https://github.com/ansible-collections/community.general/issues/9895, https://github.com/ansible-collections/community.general/pull/10024).


@ -1,2 +0,0 @@
bugfixes:
- proxmox inventory plugin - fix ``ansible_host`` staying empty for certain Proxmox nodes (https://github.com/ansible-collections/community.general/issues/5906, https://github.com/ansible-collections/community.general/pull/9952).


@ -1,2 +0,0 @@
bugfixes:
- "proxmox_disk - fail gracefully if ``storage`` is required but not provided by the user (https://github.com/ansible-collections/community.general/issues/9941, https://github.com/ansible-collections/community.general/pull/9963)."


@ -15,5 +15,3 @@ removed_features:
- "mh.module_helper module utils - ``AnsibleModule`` and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)." - "mh.module_helper module utils - ``AnsibleModule`` and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)."
- "mh.module_helper module utils - the attributes ``use_old_vardict`` and ``mute_vardict_deprecation`` from ``ModuleHelper`` have been removed. We suggest to remove them from your modules if you no longer support community.general < 11.0.0 (https://github.com/ansible-collections/community.general/pull/10126)." - "mh.module_helper module utils - the attributes ``use_old_vardict`` and ``mute_vardict_deprecation`` from ``ModuleHelper`` have been removed. We suggest to remove them from your modules if you no longer support community.general < 11.0.0 (https://github.com/ansible-collections/community.general/pull/10126)."
- "module_helper module utils - ``StateMixin``, ``DependencyCtxMgr``, ``VarMeta``, ``VarDict``, and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)." - "module_helper module utils - ``StateMixin``, ``DependencyCtxMgr``, ``VarMeta``, ``VarDict``, and ``VarsMixin`` are no longer provided (https://github.com/ansible-collections/community.general/pull/10126)."
breaking_changes:
- "proxmox - the default of ``update`` changed from ``false`` to ``true`` (https://github.com/ansible-collections/community.general/pull/10126)."


@ -0,0 +1,7 @@
removed_features:
- "The Proxmox content (modules and plugins) has been moved to the `new collection community.proxmox <https://github.com/ansible-collections/community.proxmox>`__.
Since community.general 11.0.0, these modules and plugins have been replaced by deprecated redirections to community.proxmox.
You need to explicitly install community.proxmox, for example with ``ansible-galaxy collection install community.proxmox``,
or by installing a new enough version of the Ansible community package.
We suggest to update your roles and playbooks to use the new FQCNs as soon as possible to avoid getting deprecation messages
(https://github.com/ansible-collections/community.general/pull/10110)."
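As a hedged illustration of that migration (host name, token, and file layout are placeholders, not part of this commit), a requirements file and an updated task might look like this:

# requirements.yml
collections:
  - name: community.proxmox

# Task rewritten to one of the new FQCNs that the redirects below point to:
- name: List VMs via the new collection
  community.proxmox.proxmox_vm_info:
    api_host: pve.example.com
    api_user: ansible@pam
    api_token_id: ansible
    api_token_secret: "{{ proxmox_token_secret }}"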


@ -15,24 +15,9 @@ action_groups:
- consul_session
- consul_token
proxmox:
- proxmox
- proxmox_backup
- proxmox_backup_info
- proxmox_disk
- proxmox_domain_info
- proxmox_group_info
- proxmox_kvm
- proxmox_nic
- proxmox_node_info
- proxmox_pool
- proxmox_pool_member
- proxmox_snap
- proxmox_storage_contents_info
- proxmox_storage_info
- proxmox_tasks_info
- proxmox_template
- proxmox_user_info
- proxmox_vm_info
metadata:
extend_group:
- community.proxmox.proxmox
keycloak:
- keycloak_authentication
- keycloak_authentication_required_actions
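The new metadata/extend_group entry keeps module_defaults groups working: group/community.general.proxmox now pulls in the community.proxmox group instead of listing the modules directly. A sketch of how such defaults are typically applied (all values are placeholders):

- hosts: localhost
  module_defaults:
    group/community.general.proxmox:
      api_host: pve.example.com
      api_user: ansible@pam
      api_password: "{{ proxmox_password }}"
  tasks:
    - name: Defaults apply to modules in the extended group
      community.proxmox.proxmox_node_info: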
@ -94,6 +79,11 @@ plugin_routing:
redirect: community.docker.docker
oc:
redirect: community.okd.oc
proxmox_pct_remote:
redirect: community.proxmox.proxmox_pct_remote
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
lookup:
gcp_storage_file:
redirect: community.google.gcp_storage_file
@ -664,6 +654,96 @@ plugin_routing:
tombstone:
removal_version: 11.0.0
warning_text: Supporting library is unsupported since 2021.
proxmox:
redirect: community.proxmox.proxmox
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_backup:
redirect: community.proxmox.proxmox_backup
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_backup_info:
redirect: community.proxmox.proxmox_backup_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_disk:
redirect: community.proxmox.proxmox_disk
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_domain_info:
redirect: community.proxmox.proxmox_domain_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_group_info:
redirect: community.proxmox.proxmox_group_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_kvm:
redirect: community.proxmox.proxmox_kvm
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_nic:
redirect: community.proxmox.proxmox_nic
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_node_info:
redirect: community.proxmox.proxmox_node_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_pool:
redirect: community.proxmox.proxmox_pool
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_pool_member:
redirect: community.proxmox.proxmox_pool_member
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_snap:
redirect: community.proxmox.proxmox_snap
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_storage_contents_info:
redirect: community.proxmox.proxmox_storage_contents_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_storage_info:
redirect: community.proxmox.proxmox_storage_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_tasks_info:
redirect: community.proxmox.proxmox_tasks_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_template:
redirect: community.proxmox.proxmox_template
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_user_info:
redirect: community.proxmox.proxmox_user_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
proxmox_vm_info:
redirect: community.proxmox.proxmox_vm_info
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
purefa_facts:
tombstone:
removal_version: 3.0.0
@ -922,6 +1002,11 @@ plugin_routing:
redirect: infoblox.nios_modules.nios
postgresql:
redirect: community.postgresql.postgresql
proxmox:
redirect: community.proxmox.proxmox
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
purestorage:
deprecation:
removal_version: 12.0.0
@ -950,6 +1035,11 @@ plugin_routing:
redirect: infoblox.nios_modules.api
postgresql:
redirect: community.postgresql.postgresql
proxmox:
redirect: community.proxmox.proxmox
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
pure:
deprecation:
removal_version: 12.0.0
@ -967,6 +1057,11 @@ plugin_routing:
redirect: community.docker.docker_machine
docker_swarm:
redirect: community.docker.docker_swarm
proxmox:
redirect: community.proxmox.proxmox
deprecation:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
kubevirt:
redirect: community.kubevirt.kubevirt
stackpath_compute:
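Until the announced removal in community.general 15.0.0, these redirects keep the old names resolvable, with a deprecation warning. For the connection plugin, for example, an inventory can move from the old to the new FQCN as sketched below (VM ID and user are placeholders):

# Old spelling, still resolved through the redirect above (deprecated):
# ansible_connection: community.general.proxmox_pct_remote
# New spelling after installing community.proxmox:
ansible_connection: community.proxmox.proxmox_pct_remote
proxmox_vmid: 100
ansible_user: ansible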


@ -1,857 +0,0 @@
# -*- coding: utf-8 -*-
# Derived from ansible/plugins/connection/paramiko_ssh.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright (c) 2024 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2024 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
author: Nils Stein (@mietzen) <github.nstein@mailbox.org>
name: proxmox_pct_remote
short_description: Run tasks in Proxmox LXC container instances using pct CLI via SSH
requirements:
- paramiko
description:
- Run commands or put/fetch files to an existing Proxmox LXC container using pct CLI via SSH.
- Uses the Python SSH implementation (Paramiko) to connect to the Proxmox host.
version_added: "10.3.0"
options:
remote_addr:
description:
- Address of the remote target.
default: inventory_hostname
type: string
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: ansible_paramiko_host
port:
description: Remote port to connect to.
type: int
default: 22
ini:
- section: defaults
key: remote_port
- section: paramiko_connection
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
- name: ANSIBLE_REMOTE_PARAMIKO_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
- name: ansible_paramiko_port
keyword:
- name: port
remote_user:
description:
- User to login/authenticate as.
- Can be set from the CLI via the C(--user) or C(-u) options.
type: string
vars:
- name: ansible_user
- name: ansible_ssh_user
- name: ansible_paramiko_user
env:
- name: ANSIBLE_REMOTE_USER
- name: ANSIBLE_PARAMIKO_REMOTE_USER
ini:
- section: defaults
key: remote_user
- section: paramiko_connection
key: remote_user
keyword:
- name: remote_user
password:
description:
- Secret used to either login the SSH server or as a passphrase for SSH keys that require it.
- Can be set from the CLI via the C(--ask-pass) option.
type: string
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
- name: ansible_paramiko_pass
- name: ansible_paramiko_password
use_rsa_sha2_algorithms:
description:
- Whether or not to enable RSA SHA2 algorithms for pubkeys and hostkeys.
- On paramiko versions older than 2.9, this only affects hostkeys.
- For behavior matching paramiko<2.9 set this to V(false).
vars:
- name: ansible_paramiko_use_rsa_sha2_algorithms
ini:
- {key: use_rsa_sha2_algorithms, section: paramiko_connection}
env:
- {name: ANSIBLE_PARAMIKO_USE_RSA_SHA2_ALGORITHMS}
default: true
type: boolean
host_key_auto_add:
description: "Automatically add host keys to C(~/.ssh/known_hosts)."
env:
- name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD
ini:
- key: host_key_auto_add
section: paramiko_connection
type: boolean
look_for_keys:
default: True
description: "Set to V(false) to disable searching for private key files in C(~/.ssh/)."
env:
- name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
proxy_command:
default: ""
description:
- Proxy information for running the connection via a jumphost.
type: string
env:
- name: ANSIBLE_PARAMIKO_PROXY_COMMAND
ini:
- {key: proxy_command, section: paramiko_connection}
vars:
- name: ansible_paramiko_proxy_command
pty:
default: True
description: "C(sudo) usually requires a PTY, V(true) to give a PTY and V(false) to not give a PTY."
env:
- name: ANSIBLE_PARAMIKO_PTY
ini:
- section: paramiko_connection
key: pty
type: boolean
record_host_keys:
default: True
description: "Save the host keys to a file."
env:
- name: ANSIBLE_PARAMIKO_RECORD_HOST_KEYS
ini:
- section: paramiko_connection
key: record_host_keys
type: boolean
host_key_checking:
description: "Set this to V(false) if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host."
type: boolean
default: true
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
- name: ANSIBLE_PARAMIKO_HOST_KEY_CHECKING
ini:
- section: defaults
key: host_key_checking
- section: paramiko_connection
key: host_key_checking
vars:
- name: ansible_host_key_checking
- name: ansible_ssh_host_key_checking
- name: ansible_paramiko_host_key_checking
use_persistent_connections:
description: "Toggles the use of persistence for connections."
type: boolean
default: False
env:
- name: ANSIBLE_USE_PERSISTENT_CONNECTIONS
ini:
- section: defaults
key: use_persistent_connections
banner_timeout:
type: float
default: 30
description:
- Configures, in seconds, the amount of time to wait for the SSH
banner to be presented. This option is supported by paramiko
version 1.15.0 or newer.
ini:
- section: paramiko_connection
key: banner_timeout
env:
- name: ANSIBLE_PARAMIKO_BANNER_TIMEOUT
timeout:
type: int
default: 10
description: Number of seconds until the plugin gives up on failing to establish a TCP connection.
ini:
- section: defaults
key: timeout
- section: ssh_connection
key: timeout
- section: paramiko_connection
key: timeout
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
- name: ANSIBLE_PARAMIKO_TIMEOUT
vars:
- name: ansible_ssh_timeout
- name: ansible_paramiko_timeout
cli:
- name: timeout
lock_file_timeout:
type: int
default: 60
description: Number of seconds until the plugin gives up on trying to write a lock file when writing SSH known host keys.
vars:
- name: ansible_lock_file_timeout
env:
- name: ANSIBLE_LOCK_FILE_TIMEOUT
private_key_file:
description:
- Path to private key file to use for authentication.
type: string
ini:
- section: defaults
key: private_key_file
- section: paramiko_connection
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
- name: ANSIBLE_PARAMIKO_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
- name: ansible_paramiko_private_key_file
cli:
- name: private_key_file
option: "--private-key"
vmid:
description:
- LXC Container ID
type: int
vars:
- name: proxmox_vmid
proxmox_become_method:
description:
- Become command used in proxmox
type: str
default: sudo
vars:
- name: proxmox_become_method
notes:
- >
When NOT using this plugin as root, you need to have a become mechanism,
e.g. C(sudo), installed on Proxmox and setup so we can run it without prompting for the password.
Inside the container, we need a shell, for example C(sh) and the C(cat) command to be available in the C(PATH) for this plugin to work.
"""
EXAMPLES = r"""
# --------------------------------------------------------------
# Setup sudo with password less access to pct for user 'ansible':
# --------------------------------------------------------------
#
# Open a Proxmox root shell and execute:
# $ useradd -d /opt/ansible-pct -r -m -s /bin/sh ansible
# $ mkdir -p /opt/ansible-pct/.ssh
# $ ssh-keygen -t ed25519 -C 'ansible' -N "" -f /opt/ansible-pct/.ssh/ansible <<< y > /dev/null
# $ cat /opt/ansible-pct/.ssh/ansible
# $ mv /opt/ansible-pct/.ssh/ansible.pub /opt/ansible-pct/.ssh/authorized-keys
# $ rm /opt/ansible-pct/.ssh/ansible*
# $ chown -R ansible:ansible /opt/ansible-pct/.ssh
# $ chmod 700 /opt/ansible-pct/.ssh
# $ chmod 600 /opt/ansible-pct/.ssh/authorized-keys
# $ echo 'ansible ALL = (root) NOPASSWD: /usr/sbin/pct' > /etc/sudoers.d/ansible_pct
#
# Save the displayed private key and add it to your ssh-agent
#
# Or use ansible:
# ---
# - name: Setup ansible-pct user and configure environment on Proxmox host
# hosts: proxmox
# become: true
# gather_facts: false
#
# tasks:
# - name: Create ansible user
# ansible.builtin.user:
# name: ansible
# comment: Ansible User
# home: /opt/ansible-pct
# shell: /bin/sh
# create_home: true
# system: true
#
# - name: Create .ssh directory
# ansible.builtin.file:
# path: /opt/ansible-pct/.ssh
# state: directory
# owner: ansible
# group: ansible
# mode: '0700'
#
# - name: Generate SSH key for ansible user
# community.crypto.openssh_keypair:
# path: /opt/ansible-pct/.ssh/ansible
# type: ed25519
# comment: 'ansible'
# force: true
# mode: '0600'
# owner: ansible
# group: ansible
#
# - name: Set public key as authorized key
# ansible.builtin.copy:
# src: /opt/ansible-pct/.ssh/ansible.pub
# dest: /opt/ansible-pct/.ssh/authorized-keys
# remote_src: yes
# owner: ansible
# group: ansible
# mode: '0600'
#
# - name: Add sudoers entry for ansible user
# ansible.builtin.copy:
# content: 'ansible ALL = (root) NOPASSWD: /usr/sbin/pct'
# dest: /etc/sudoers.d/ansible_pct
# owner: root
# group: root
# mode: '0440'
#
# - name: Fetch private SSH key to localhost
# ansible.builtin.fetch:
# src: /opt/ansible-pct/.ssh/ansible
# dest: ~/.ssh/proxmox_ansible_private_key
# flat: yes
# fail_on_missing: true
#
# - name: Clean up generated SSH keys
# ansible.builtin.file:
# path: /opt/ansible-pct/.ssh/ansible*
# state: absent
#
# - name: Configure private key permissions on localhost
# hosts: localhost
# tasks:
# - name: Set permissions for fetched private key
# ansible.builtin.file:
# path: ~/.ssh/proxmox_ansible_private_key
# mode: '0600'
#
# --------------------------------
# Static inventory file: hosts.yml
# --------------------------------
# all:
# children:
# lxc:
# hosts:
# container-1:
# ansible_host: 10.0.0.10
# proxmox_vmid: 100
# ansible_connection: community.general.proxmox_pct_remote
# ansible_user: ansible
# container-2:
# ansible_host: 10.0.0.10
# proxmox_vmid: 200
# ansible_connection: community.general.proxmox_pct_remote
# ansible_user: ansible
# proxmox:
# hosts:
# proxmox-1:
# ansible_host: 10.0.0.10
#
#
# ---------------------------------------------
# Dynamic inventory file: inventory.proxmox.yml
# ---------------------------------------------
# plugin: community.general.proxmox
# url: https://10.0.0.10:8006
# validate_certs: false
# user: ansible@pam
# token_id: ansible
# token_secret: !vault |
# $ANSIBLE_VAULT;1.1;AES256
# ...
# want_facts: true
# exclude_nodes: true
# filters:
# - proxmox_vmtype == "lxc"
# want_proxmox_nodes_ansible_host: false
# compose:
# ansible_host: "'10.0.0.10'"
# ansible_connection: "'community.general.proxmox_pct_remote'"
# ansible_user: "'ansible'"
#
#
# ----------------------
# Playbook: playbook.yml
# ----------------------
---
- hosts: lxc
# On nodes with many containers you might want to deactivate the devices facts
# or set `gather_facts: false` if you don't need them.
# More info on gathering fact subsets:
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html
#
# gather_facts: true
# gather_subset:
# - "!devices"
tasks:
- name: Ping LXC container
ansible.builtin.ping:
"""
import os
import pathlib
import socket
import tempfile
import typing as t
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
)
from ansible_collections.community.general.plugins.module_utils._filelock import FileLock, LockTimeout
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.compat.paramiko import PARAMIKO_IMPORT_ERR, paramiko
from ansible.module_utils.compat.version import LooseVersion
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import makedirs_safe
from binascii import hexlify
display = Display()
def authenticity_msg(hostname: str, ktype: str, fingerprint: str) -> str:
msg = f"""
paramiko: The authenticity of host '{hostname}' can't be established.
The {ktype} key fingerprint is {fingerprint}.
Are you sure you want to continue connecting (yes/no)?
"""
return msg
MissingHostKeyPolicy: type = object
if paramiko:
MissingHostKeyPolicy = paramiko.MissingHostKeyPolicy
class MyAddPolicy(MissingHostKeyPolicy):
"""
Based on AutoAddPolicy in paramiko so we can determine when keys are added
and also prompt for input.
Policy for automatically adding the hostname and new host key to the
local L{HostKeys} object, and saving it. This is used by L{SSHClient}.
"""
def __init__(self, connection: Connection) -> None:
self.connection = connection
self._options = connection._options
def missing_host_key(self, client, hostname, key) -> None:
if all((self.connection.get_option('host_key_checking'), not self.connection.get_option('host_key_auto_add'))):
fingerprint = hexlify(key.get_fingerprint())
ktype = key.get_name()
if self.connection.get_option('use_persistent_connections') or self.connection.force_persistence:
# don't print the prompt string since the user cannot respond
# to the question anyway
raise AnsibleError(authenticity_msg(hostname, ktype, fingerprint)[1:92])
inp = to_text(
display.prompt_until(authenticity_msg(hostname, ktype, fingerprint), private=False),
errors='surrogate_or_strict'
)
if inp.lower() not in ['yes', 'y', '']:
raise AnsibleError('host connection rejected by user')
key._added_by_ansible_this_time = True
# existing implementation below:
client._host_keys.add(hostname, key.get_name(), key)
# host keys are actually saved in close() function below
# in order to control ordering.
class Connection(ConnectionBase):
""" SSH based connections (paramiko) to Proxmox pct """
transport = 'community.general.proxmox_pct_remote'
_log_channel: str | None = None
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
def _set_log_channel(self, name: str) -> None:
""" Mimic paramiko.SSHClient.set_log_channel """
self._log_channel = name
def _parse_proxy_command(self, port: int = 22) -> dict[str, t.Any]:
proxy_command = self.get_option('proxy_command') or None
sock_kwarg = {}
if proxy_command:
replacers = {
'%h': self.get_option('remote_addr'),
'%p': port,
'%r': self.get_option('remote_user')
}
for find, replace in replacers.items():
proxy_command = proxy_command.replace(find, str(replace))
try:
sock_kwarg = {'sock': paramiko.ProxyCommand(proxy_command)}
display.vvv(f'CONFIGURE PROXY COMMAND FOR CONNECTION: {proxy_command}', host=self.get_option('remote_addr'))
except AttributeError:
display.warning('Paramiko ProxyCommand support unavailable. '
'Please upgrade to Paramiko 1.9.0 or newer. '
'Not using configured ProxyCommand')
return sock_kwarg
def _connect(self) -> Connection:
""" activates the connection object """
if paramiko is None:
raise AnsibleError(f'paramiko is not installed: {to_native(PARAMIKO_IMPORT_ERR)}')
port = self.get_option('port')
display.vvv(f'ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {self.get_option("remote_user")} on PORT {to_text(port)} TO {self.get_option("remote_addr")}',
host=self.get_option('remote_addr'))
ssh = paramiko.SSHClient()
# Set pubkey and hostkey algorithms to disable, the only manipulation allowed currently
# is keeping or omitting rsa-sha2 algorithms
# default_keys: t.Tuple[str] = ()
paramiko_preferred_pubkeys = getattr(paramiko.Transport, '_preferred_pubkeys', ())
paramiko_preferred_hostkeys = getattr(paramiko.Transport, '_preferred_keys', ())
use_rsa_sha2_algorithms = self.get_option('use_rsa_sha2_algorithms')
disabled_algorithms: t.Dict[str, t.Iterable[str]] = {}
if not use_rsa_sha2_algorithms:
if paramiko_preferred_pubkeys:
disabled_algorithms['pubkeys'] = tuple(a for a in paramiko_preferred_pubkeys if 'rsa-sha2' in a)
if paramiko_preferred_hostkeys:
disabled_algorithms['keys'] = tuple(a for a in paramiko_preferred_hostkeys if 'rsa-sha2' in a)
# override paramiko's default logger name
if self._log_channel is not None:
ssh.set_log_channel(self._log_channel)
self.keyfile = os.path.expanduser('~/.ssh/known_hosts')
if self.get_option('host_key_checking'):
for ssh_known_hosts in ('/etc/ssh/ssh_known_hosts', '/etc/openssh/ssh_known_hosts'):
try:
ssh.load_system_host_keys(ssh_known_hosts)
break
except IOError:
pass # file was not found, but not required to function
except paramiko.hostkeys.InvalidHostKey as e:
raise AnsibleConnectionFailure(f'Invalid host key: {to_text(e.line)}')
try:
ssh.load_system_host_keys()
except paramiko.hostkeys.InvalidHostKey as e:
raise AnsibleConnectionFailure(f'Invalid host key: {to_text(e.line)}')
ssh_connect_kwargs = self._parse_proxy_command(port)
ssh.set_missing_host_key_policy(MyAddPolicy(self))
conn_password = self.get_option('password')
allow_agent = True
if conn_password is not None:
allow_agent = False
try:
key_filename = None
if self.get_option('private_key_file'):
key_filename = os.path.expanduser(self.get_option('private_key_file'))
# paramiko 2.2 introduced auth_timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('2.2.0'):
ssh_connect_kwargs['auth_timeout'] = self.get_option('timeout')
# paramiko 1.15 introduced banner timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('1.15.0'):
ssh_connect_kwargs['banner_timeout'] = self.get_option('banner_timeout')
ssh.connect(
self.get_option('remote_addr').lower(),
username=self.get_option('remote_user'),
allow_agent=allow_agent,
look_for_keys=self.get_option('look_for_keys'),
key_filename=key_filename,
password=conn_password,
timeout=self.get_option('timeout'),
port=port,
disabled_algorithms=disabled_algorithms,
**ssh_connect_kwargs,
)
except paramiko.ssh_exception.BadHostKeyException as e:
raise AnsibleConnectionFailure(f'host key mismatch for {to_text(e.hostname)}')
except paramiko.ssh_exception.AuthenticationException as e:
msg = f'Failed to authenticate: {e}'
raise AnsibleAuthenticationFailure(msg)
except Exception as e:
msg = to_text(e)
if u'PID check failed' in msg:
raise AnsibleError('paramiko version issue, please upgrade paramiko on the machine running ansible')
elif u'Private key file is encrypted' in msg:
msg = f'ssh {self.get_option("remote_user")}@{self.get_options("remote_addr")}:{port} : ' + \
f'{msg}\nTo connect as a different user, use -u <username>.'
raise AnsibleConnectionFailure(msg)
else:
raise AnsibleConnectionFailure(msg)
self.ssh = ssh
self._connected = True
return self
def _any_keys_added(self) -> bool:
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
return True
return False
def _save_ssh_host_keys(self, filename: str) -> None:
"""
not using the paramiko save_ssh_host_keys function as we want to add new SSH keys at the bottom so folks
don't complain about it :)
"""
if not self._any_keys_added():
return
path = os.path.expanduser('~/.ssh')
makedirs_safe(path)
with open(filename, 'w') as f:
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
# was f.write
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if not added_this_time:
f.write(f'{hostname} {keytype} {key.get_base64()}\n')
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
f.write(f'{hostname} {keytype} {key.get_base64()}\n')
def _build_pct_command(self, cmd: str) -> str:
cmd = ['/usr/sbin/pct', 'exec', str(self.get_option('vmid')), '--', cmd]
if self.get_option('remote_user') != 'root':
cmd = [self.get_option('proxmox_become_method')] + cmd
display.vvv(f'INFO Running as non root user: {self.get_option("remote_user")}, trying to run pct with become method: ' +
f'{self.get_option("proxmox_become_method")}',
host=self.get_option('remote_addr'))
return ' '.join(cmd)
def exec_command(self, cmd: str, in_data: bytes | None = None, sudoable: bool = True) -> tuple[int, bytes, bytes]:
""" run a command on inside the LXC container """
cmd = self._build_pct_command(cmd)
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
bufsize = 4096
try:
self.ssh.get_transport().set_keepalive(5)
chan = self.ssh.get_transport().open_session()
except Exception as e:
text_e = to_text(e)
msg = 'Failed to open session'
if text_e:
msg += f': {text_e}'
raise AnsibleConnectionFailure(to_native(msg))
# sudo usually requires a PTY (cf. requiretty option), therefore
# we give it one by default (pty=True in ansible.cfg), and we try
# to initialise from the calling environment when sudoable is enabled
if self.get_option('pty') and sudoable:
chan.get_pty(term=os.getenv('TERM', 'vt100'), width=int(os.getenv('COLUMNS', 0)), height=int(os.getenv('LINES', 0)))
display.vvv(f'EXEC {cmd}', host=self.get_option('remote_addr'))
cmd = to_bytes(cmd, errors='surrogate_or_strict')
no_prompt_out = b''
no_prompt_err = b''
become_output = b''
try:
chan.exec_command(cmd)
if self.become and self.become.expect_prompt():
password_prompt = False
become_success = False
while not (become_success or password_prompt):
display.debug('Waiting for Privilege Escalation input')
chunk = chan.recv(bufsize)
display.debug(f'chunk is: {to_text(chunk)}')
if not chunk:
if b'unknown user' in become_output:
n_become_user = to_native(self.become.get_option('become_user'))
raise AnsibleError(f'user {n_become_user} does not exist')
else:
break
# raise AnsibleError('ssh connection closed waiting for password prompt')
become_output += chunk
# need to check every line because we might get lectured
# and we might get the middle of a line in a chunk
for line in become_output.splitlines(True):
if self.become.check_success(line):
become_success = True
break
elif self.become.check_password_prompt(line):
password_prompt = True
break
if password_prompt:
if self.become:
become_pass = self.become.get_option('become_pass')
chan.sendall(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
else:
raise AnsibleError('A password is required but none was supplied')
else:
no_prompt_out += become_output
no_prompt_err += become_output
if in_data:
for i in range(0, len(in_data), bufsize):
chan.send(in_data[i:i + bufsize])
chan.shutdown_write()
elif in_data == b'':
chan.shutdown_write()
except socket.timeout:
raise AnsibleError('ssh timed out waiting for privilege escalation.\n' + to_text(become_output))
stdout = b''.join(chan.makefile('rb', bufsize))
stderr = b''.join(chan.makefile_stderr('rb', bufsize))
returncode = chan.recv_exit_status()
if 'pct: not found' in stderr.decode('utf-8'):
raise AnsibleError(
f'pct not found in path of host: {to_text(self.get_option("remote_addr"))}')
return (returncode, no_prompt_out + stdout, no_prompt_out + stderr)
def put_file(self, in_path: str, out_path: str) -> None:
""" transfer a file from local to remote """
display.vvv(f'PUT {in_path} TO {out_path}', host=self.get_option('remote_addr'))
try:
with open(in_path, 'rb') as f:
data = f.read()
returncode, stdout, stderr = self.exec_command(
' '.join([
self._shell.executable, '-c',
self._shell.quote(f'cat > {out_path}')]),
in_data=data,
sudoable=False)
if returncode != 0:
if 'cat: not found' in stderr.decode('utf-8'):
raise AnsibleError(
f'cat not found in path of container: {to_text(self.get_option("vmid"))}')
raise AnsibleError(
f'{to_text(stdout)}\n{to_text(stderr)}')
except Exception as e:
raise AnsibleError(
f'error occurred while putting file from {in_path} to {out_path}!\n{to_text(e)}')
def fetch_file(self, in_path: str, out_path: str) -> None:
""" save a remote file to the specified path """
display.vvv(f'FETCH {in_path} TO {out_path}', host=self.get_option('remote_addr'))
try:
returncode, stdout, stderr = self.exec_command(
' '.join([
self._shell.executable, '-c',
self._shell.quote(f'cat {in_path}')]),
sudoable=False)
if returncode != 0:
if 'cat: not found' in stderr.decode('utf-8'):
raise AnsibleError(
f'cat not found in path of container: {to_text(self.get_option("vmid"))}')
raise AnsibleError(
f'{to_text(stdout)}\n{to_text(stderr)}')
with open(out_path, 'wb') as f:
f.write(stdout)
except Exception as e:
raise AnsibleError(
f'error occurred while fetching file from {in_path} to {out_path}!\n{to_text(e)}')
def reset(self) -> None:
""" reset the connection """
if not self._connected:
return
self.close()
self._connect()
def close(self) -> None:
""" terminate the connection """
if self.get_option('host_key_checking') and self.get_option('record_host_keys') and self._any_keys_added():
# add any new SSH host keys -- warning -- this could be slow
# (This doesn't acquire the connection lock because it needs
# to exclude only other known_hosts writers, not connections
# that are starting up.)
lockfile = os.path.basename(self.keyfile)
dirname = os.path.dirname(self.keyfile)
makedirs_safe(dirname)
tmp_keyfile_name = None
try:
with FileLock().lock_file(lockfile, dirname, self.get_option('lock_file_timeout')):
# just in case any were added recently
self.ssh.load_system_host_keys()
self.ssh._host_keys.update(self.ssh._system_host_keys)
# gather information about the current key file, so
# we can ensure the new file has the correct mode/owner
key_dir = os.path.dirname(self.keyfile)
if os.path.exists(self.keyfile):
key_stat = os.stat(self.keyfile)
mode = key_stat.st_mode & 0o777
uid = key_stat.st_uid
gid = key_stat.st_gid
else:
mode = 0o644
uid = os.getuid()
gid = os.getgid()
# Save the new keys to a temporary file and move it into place
# rather than rewriting the file. We set delete=False because
# the file will be moved into place rather than cleaned up.
with tempfile.NamedTemporaryFile(dir=key_dir, delete=False) as tmp_keyfile:
tmp_keyfile_name = tmp_keyfile.name
os.chmod(tmp_keyfile_name, mode)
os.chown(tmp_keyfile_name, uid, gid)
self._save_ssh_host_keys(tmp_keyfile_name)
os.rename(tmp_keyfile_name, self.keyfile)
except LockTimeout:
raise AnsibleError(
f'writing lock file for {self.keyfile} ran in to the timeout of {self.get_option("lock_file_timeout")}s')
except paramiko.hostkeys.InvalidHostKey as e:
raise AnsibleConnectionFailure(f'Invalid host key: {e.line}')
except Exception as e:
# unable to save keys, including scenario when key was invalid
# and caught earlier
raise AnsibleError(
f'error occurred while writing SSH host keys!\n{to_text(e)}')
finally:
if tmp_keyfile_name is not None:
pathlib.Path(tmp_keyfile_name).unlink(missing_ok=True)
self.ssh.close()
self._connected = False


@ -1,84 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Common parameters for Proxmox VE modules
DOCUMENTATION = r"""
options:
api_host:
description:
- Specify the target host of the Proxmox VE cluster.
type: str
required: true
api_port:
description:
- Specify the target port of the Proxmox VE cluster.
- Uses the E(PROXMOX_PORT) environment variable if not specified.
type: int
required: false
version_added: 9.1.0
api_user:
description:
- Specify the user to authenticate with.
type: str
required: true
api_password:
description:
- Specify the password to authenticate with.
- You can use E(PROXMOX_PASSWORD) environment variable.
type: str
api_token_id:
description:
- Specify the token ID.
- Requires C(proxmoxer>=1.1.0) to work.
type: str
version_added: 1.3.0
api_token_secret:
description:
- Specify the token secret.
- Requires C(proxmoxer>=1.1.0) to work.
type: str
version_added: 1.3.0
validate_certs:
description:
- If V(false), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: false
requirements: ["proxmoxer", "requests"]
"""
SELECTION = r"""
options:
vmid:
description:
- Specifies the instance ID.
- If not set the next available ID will be fetched from ProxmoxAPI.
type: int
node:
description:
- Proxmox VE node on which to operate.
- Only required for O(state=present).
- For every other states it will be autodiscovered.
type: str
pool:
description:
- Add the new VM to the specified pool.
type: str
"""
ACTIONGROUP_PROXMOX = r"""
options: {}
attributes:
action_group:
description: Use C(group/community.general.proxmox) in C(module_defaults) to set defaults for this module.
support: full
membership:
- community.general.proxmox
"""


@ -1,715 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>, Daniel Lobato Garcia <dlobatog@redhat.com>
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = '''
name: proxmox
short_description: Proxmox inventory source
version_added: "1.2.0"
author:
- Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
requirements:
- requests >= 1.1
description:
- Get inventory hosts from a Proxmox PVE cluster.
- "Uses a configuration file as an inventory source, it must end in C(.proxmox.yml) or C(.proxmox.yaml)"
- Will retrieve the first network interface with an IP for Proxmox nodes.
- Can retrieve LXC/QEMU configuration as facts.
extends_documentation_fragment:
- constructed
- inventory_cache
options:
plugin:
description: The name of this plugin, it should always be set to V(community.general.proxmox) for this plugin to recognize it as its own.
required: true
choices: ['community.general.proxmox']
type: str
url:
description:
- URL to Proxmox cluster.
- If the value is not specified in the inventory configuration, the value of environment variable E(PROXMOX_URL) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the O(url).
default: 'http://localhost:8006'
type: str
env:
- name: PROXMOX_URL
version_added: 2.0.0
user:
description:
- Proxmox authentication user.
- If the value is not specified in the inventory configuration, the value of environment variable E(PROXMOX_USER) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the O(user).
required: true
type: str
env:
- name: PROXMOX_USER
version_added: 2.0.0
password:
description:
- Proxmox authentication password.
- If the value is not specified in the inventory configuration, the value of environment variable E(PROXMOX_PASSWORD) will be used instead.
- Since community.general 4.7.0 you can also use templating to specify the value of the O(password).
- If you do not specify a password, you must set O(token_id) and O(token_secret) instead.
type: str
env:
- name: PROXMOX_PASSWORD
version_added: 2.0.0
token_id:
description:
- Proxmox authentication token ID.
- If the value is not specified in the inventory configuration, the value of environment variable E(PROXMOX_TOKEN_ID) will be used instead.
- To use token authentication, you must also specify O(token_secret). If you do not specify O(token_id) and O(token_secret),
you must set a password instead.
- Make sure to grant explicit pve permissions to the token or disable 'privilege separation' to use the users' privileges instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_ID
token_secret:
description:
- Proxmox authentication token secret.
- If the value is not specified in the inventory configuration, the value of environment variable E(PROXMOX_TOKEN_SECRET) will be used instead.
- To use token authentication, you must also specify O(token_id). If you do not specify O(token_id) and O(token_secret),
you must set a password instead.
version_added: 4.8.0
type: str
env:
- name: PROXMOX_TOKEN_SECRET
validate_certs:
description: Verify SSL certificate if using HTTPS.
type: boolean
default: true
group_prefix:
description: Prefix to apply to Proxmox groups.
default: proxmox_
type: str
facts_prefix:
description: Prefix to apply to LXC/QEMU config facts.
default: proxmox_
type: str
want_facts:
description:
- Gather LXC/QEMU configuration facts.
- When O(want_facts) is set to V(true), more details about the QEMU VM status are available, besides the running and stopped states.
Currently, if the VM is running but suspended, the status will be running and the machine will be in the C(running) group,
but its actual state will be paused. See O(qemu_extended_statuses) for how to retrieve the real status.
default: false
type: bool
qemu_extended_statuses:
description:
- Requires O(want_facts) to be set to V(true) to function. This will allow you to differentiate between C(paused) and C(prelaunch)
statuses of the QEMU VMs.
- This introduces the additional groups C(prelaunch) and C(paused), prefixed with O(group_prefix).
default: false
type: bool
version_added: 5.1.0
want_proxmox_nodes_ansible_host:
version_added: 3.0.0
description:
- Whether to set C(ansible_host) for proxmox nodes.
- When set to V(true), the first available interface will be used. This can be different from what you expect.
- The default of this option changed from V(true) to V(false) in community.general 6.0.0.
type: bool
default: false
exclude_nodes:
description: Exclude proxmox nodes and the nodes-group from the inventory output.
type: bool
default: false
version_added: 8.1.0
filters:
version_added: 4.6.0
description: A list of Jinja templates that allow filtering hosts.
type: list
elements: str
default: []
strict:
version_added: 2.5.0
compose:
version_added: 2.5.0
groups:
version_added: 2.5.0
keyed_groups:
version_added: 2.5.0
'''
EXAMPLES = '''
---
# Minimal example which will not gather additional facts for QEMU/LXC guests
# By not specifying a URL the plugin will attempt to connect to the controller host on port 8006
# my.proxmox.yml
plugin: community.general.proxmox
user: ansible@pve
password: secure
# Note that this can easily give you wrong values as ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
---
# Instead of logging in with a password, Proxmox supports API token authentication since release 6.2.
plugin: community.general.proxmox
user: ci@pve
token_id: gitlab-1
token_secret: fa256e9c-26ab-41ec-82da-707a2c079829
---
# The secret can also be a vault string or passed via the environment variable PROXMOX_TOKEN_SECRET.
plugin: community.general.proxmox
user: ci@pve
token_id: gitlab-1
token_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62353634333163633336343265623632626339313032653563653165313262343931643431656138
6134333736323265656466646539663134306166666237630a653363623262636663333762316136
34616361326263383766366663393837626437316462313332663736623066656237386531663731
3037646432383064630a663165303564623338666131353366373630656661333437393937343331
32643131386134396336623736393634373936356332623632306561356361323737313663633633
6231313333666361656537343562333337323030623732323833
---
# More complete example demonstrating the use of 'want_facts' and the constructed options
# Note that using facts returned by 'want_facts' in constructed options requires 'want_facts=true'
# my.proxmox.yml
plugin: community.general.proxmox
url: http://pve.domain.com:8006
user: ansible@pve
password: secure
want_facts: true
keyed_groups:
# proxmox_tags_parsed is an example of a fact only returned when 'want_facts=true'
- key: proxmox_tags_parsed
separator: ""
prefix: group
groups:
webservers: "'web' in (proxmox_tags_parsed|list)"
mailservers: "'mail' in (proxmox_tags_parsed|list)"
compose:
ansible_port: 2222
# Note that this can easily give you wrong values as ansible_host. See further below for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
---
# Using the inventory to allow ansible to connect via the first IP address of the VM / Container
# (Default is connection by name of QEMU/LXC guests)
# Note: my_inv_var demonstrates how to add a string variable to every host used by the inventory.
# my.proxmox.yml
plugin: community.general.proxmox
url: http://192.168.1.2:8006
user: ansible@pve
password: secure
validate_certs: false # only do this when you trust the network!
want_facts: true
want_proxmox_nodes_ansible_host: false
compose:
ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ipaddr('address')
my_inv_var_1: "'my_var1_value'"
my_inv_var_2: >
"my_var_2_value"
---
# Specify the url, user and password using templating
# my.proxmox.yml
plugin: community.general.proxmox
url: "{{ lookup('ansible.builtin.ini', 'url', section='proxmox', file='file.ini') }}"
user: "{{ lookup('ansible.builtin.env','PM_USER') | default('ansible@pve') }}"
password: "{{ lookup('community.general.random_string', base64=True) }}"
# Note that this can easily give you wrong values as ansible_host. See further up for
# an example where this is set to `false` and where ansible_host is set with `compose`.
want_proxmox_nodes_ansible_host: true
'''
import itertools
import re
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.errors import AnsibleError
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.utils.display import Display
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
from ansible_collections.community.general.plugins.plugin_utils.unsafe import make_unsafe
# 3rd party imports
try:
import requests
if LooseVersion(requests.__version__) < LooseVersion('1.1.0'):
raise ImportError
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
display = Display()
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
''' Host inventory parser for ansible using Proxmox as source. '''
NAME = 'community.general.proxmox'
def __init__(self):
super(InventoryModule, self).__init__()
# from config
self.proxmox_url = None
self.session = None
self.cache_key = None
self.use_cache = None
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('proxmox.yaml', 'proxmox.yml')):
valid = True
else:
self.display.vvv('Skipping due to inventory source not ending in "proxmox.yaml" or "proxmox.yml"')
return valid
def _get_session(self):
if not self.session:
self.session = requests.session()
self.session.verify = self.get_option('validate_certs')
return self.session
def _get_auth(self):
validate_certs = self.get_option('validate_certs')
if validate_certs is False:
from requests.packages.urllib3 import disable_warnings
disable_warnings()
if self.proxmox_password:
credentials = urlencode({'username': self.proxmox_user, 'password': self.proxmox_password})
a = self._get_session()
ret = a.post(f'{self.proxmox_url}/api2/json/access/ticket', data=credentials)
json = ret.json()
self.headers = {
# only required for POST/PUT/DELETE methods, which we are not using currently
# 'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
'Cookie': f"PVEAuthCookie={json['data']['ticket']}"
}
else:
# Clean and format token components
user = self.proxmox_user.strip()
token_id = self.proxmox_token_id.strip()
token_secret = self.proxmox_token_secret.strip()
# Build token string without newlines
token = f'{user}!{token_id}={token_secret}'
# Set headers with clean token
self.headers = {'Authorization': f'PVEAPIToken={token}'}
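# Resulting request headers for the two authentication paths above (values are illustrative placeholders):
#   password auth: {'Cookie': 'PVEAuthCookie=<ticket>'}
#   token auth:    {'Authorization': 'PVEAPIToken=<user>!<token_id>=<token_secret>'}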
def _get_json(self, url, ignore_errors=None):
data = []
has_data = False
if self.use_cache:
try:
data = self._cache[self.cache_key][url]
has_data = True
except KeyError:
self.update_cache = True
if not has_data:
s = self._get_session()
while True:
ret = s.get(url, headers=self.headers)
if ignore_errors and ret.status_code in ignore_errors:
break
ret.raise_for_status()
json = ret.json()
# process results
# FIXME: This assumes 'return type' matches a specific query,
# it will break if we expand the queries and they don't have different types
if 'data' not in json:
# /hosts/:id does not have a 'data' key
data = json
break
elif isinstance(json['data'], MutableMapping):
# /facts are returned as dict in 'data'
data = json['data']
break
else:
if json['data']:
# the 'data' key of /hosts is a list of all hosts; the returned data is paginated
data = data + json['data']
break
self._results[url] = data
return make_unsafe(data)
def _get_nodes(self):
return self._get_json(f"{self.proxmox_url}/api2/json/nodes")
def _get_pools(self):
return self._get_json(f"{self.proxmox_url}/api2/json/pools")
def _get_lxc_per_node(self, node):
return self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/lxc")
def _get_qemu_per_node(self, node):
return self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/qemu")
def _get_members_per_pool(self, pool):
ret = self._get_json(f"{self.proxmox_url}/api2/json/pools/{pool}")
return ret['members']
def _get_node_ip(self, node):
ret = self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/network")
# sort interface by iface name to make selection as stable as possible
ret.sort(key=lambda x: x['iface'])
for iface in ret:
try:
# only process interfaces adhering to these rules
if 'active' not in iface:
self.display.vvv(f"Interface {iface['iface']} on node {node} does not have an active state")
continue
if 'address' not in iface:
self.display.vvv(f"Interface {iface['iface']} on node {node} does not have an address")
continue
if 'gateway' not in iface:
self.display.vvv(f"Interface {iface['iface']} on node {node} does not have a gateway")
continue
self.display.vv(f"Using interface {iface['iface']} on node {node} with address {iface['address']} as node ip for ansible_host")
return iface['address']
except Exception:
continue
return None
def _get_lxc_interfaces(self, properties, node, vmid):
status_key = self._fact('status')
if status_key not in properties or not properties[status_key] == 'running':
return
ret = self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/lxc/{vmid}/interfaces", ignore_errors=[501])
if not ret:
return
result = []
for iface in ret:
result_iface = {
'name': iface['name'],
'hwaddr': iface['hwaddr']
}
if 'inet' in iface:
result_iface['inet'] = iface['inet']
if 'inet6' in iface:
result_iface['inet6'] = iface['inet6']
result.append(result_iface)
properties[self._fact('lxc_interfaces')] = result
def _get_agent_network_interfaces(self, node, vmid, vmtype):
result = []
try:
ifaces = self._get_json(
f"{self.proxmox_url}/api2/json/nodes/{node}/{vmtype}/{vmid}/agent/network-get-interfaces"
)['result']
if "error" in ifaces:
if "class" in ifaces["error"]:
# This happens on Windows: even though the qemu agent is running, the IP address
# cannot be fetched because it is unsupported; a disabled command can also happen.
errorClass = ifaces["error"]["class"]
if errorClass in ["Unsupported"]:
self.display.v("Retrieving network interfaces from guest agents on windows with older qemu-guest-agents is not supported")
elif errorClass in ["CommandDisabled"]:
self.display.v("Retrieving network interfaces from guest agents has been disabled")
return result
for iface in ifaces:
result.append({
'name': iface['name'],
'mac-address': iface['hardware-address'] if 'hardware-address' in iface else '',
'ip-addresses': [f"{ip['ip-address']}/{ip['prefix']}" for ip in iface['ip-addresses']] if 'ip-addresses' in iface else []
})
except requests.HTTPError:
pass
return result
def _get_vm_config(self, properties, node, vmid, vmtype, name):
ret = self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/{vmtype}/{vmid}/config")
properties[self._fact('node')] = node
properties[self._fact('vmid')] = vmid
properties[self._fact('vmtype')] = vmtype
plaintext_configs = [
'description',
]
for config in ret:
key = self._fact(config)
value = ret[config]
try:
# fixup disk images as they have no key
if config == 'rootfs' or config.startswith(('virtio', 'sata', 'ide', 'scsi')):
value = f"disk_image={value}"
# Additional field containing parsed tags as list
if config == 'tags':
stripped_value = value.strip()
if stripped_value:
parsed_key = f"{key}_parsed"
properties[parsed_key] = [tag.strip() for tag in stripped_value.replace(',', ';').split(";")]
# The first field in the agent string tells you whether the agent is enabled;
# the rest of the comma-separated string is extra config for the agent.
# In some instances (newer versions of Proxmox) it can be 'enabled=1'.
if config == 'agent':
agent_enabled = 0
try:
agent_enabled = int(value.split(',')[0])
except ValueError:
if value.split(',')[0] == "enabled=1":
agent_enabled = 1
if agent_enabled:
agent_iface_value = self._get_agent_network_interfaces(node, vmid, vmtype)
if agent_iface_value:
agent_iface_key = self.to_safe(f'{key}_interfaces')
properties[agent_iface_key] = agent_iface_value
if config == 'lxc':
out_val = {}
for k, v in value:
if k.startswith('lxc.'):
k = k[len('lxc.'):]
out_val[k] = v
value = out_val
if config not in plaintext_configs and isinstance(value, string_types) \
and all("=" in v for v in value.split(",")):
# split off strings with commas to a dict
# skip over any keys that cannot be processed
try:
value = dict(key.split("=", 1) for key in value.split(","))
except Exception:
continue
properties[key] = value
except NameError:
return None
def _get_vm_status(self, properties, node, vmid, vmtype, name):
ret = self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/{vmtype}/{vmid}/status/current")
properties[self._fact('status')] = ret['status']
if vmtype == 'qemu':
properties[self._fact('qmpstatus')] = ret['qmpstatus']
def _get_vm_snapshots(self, properties, node, vmid, vmtype, name):
ret = self._get_json(f"{self.proxmox_url}/api2/json/nodes/{node}/{vmtype}/{vmid}/snapshot")
snapshots = [snapshot['name'] for snapshot in ret if snapshot['name'] != 'current']
properties[self._fact('snapshots')] = snapshots
def to_safe(self, word):
'''Converts 'bad' characters in a string to underscores so they can be used as Ansible groups
#> ProxmoxInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = r"[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def _fact(self, name):
'''Generate a fact's full name from the common prefix and a name.'''
return self.to_safe(f'{self.facts_prefix}{name.lower()}')
def _group(self, name):
'''Generate a group's full name from the common prefix and a name.'''
return self.to_safe(f'{self.group_prefix}{name.lower()}')
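# Illustrative examples for the two helpers above, assuming the default prefixes of 'proxmox_':
#   self._fact('qmpstatus')      -> 'proxmox_qmpstatus'
#   self._group('all_lxc')       -> 'proxmox_all_lxc'
#   self._group('pool_Web-Team') -> 'proxmox_pool_web_team'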
def _can_add_host(self, name, properties):
'''Ensure that a host satisfies all defined host filters. If strict mode is
enabled, any error during host filter compositing will lead to an AnsibleError
being raised; otherwise the filter will be ignored.
'''
for host_filter in self.host_filters:
try:
if not self._compose(host_filter, properties):
return False
except Exception as e: # pylint: disable=broad-except
message = f"Could not evaluate host filter {host_filter} for host {name} - {e}"
if self.strict:
raise AnsibleError(message)
display.warning(message)
return True
def _add_host(self, name, variables):
self.inventory.add_host(name)
for k, v in variables.items():
self.inventory.set_variable(name, k, v)
variables = self.inventory.get_host(name).get_vars()
self._set_composite_vars(self.get_option('compose'), variables, name, strict=self.strict)
self._add_host_to_composed_groups(self.get_option('groups'), variables, name, strict=self.strict)
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), variables, name, strict=self.strict)
def _handle_item(self, node, ittype, item):
'''Handle an item from the list of LXC containers and Qemu VMs. The
return value will be either None if the item was skipped or the name of
the item if it was added to the inventory.'''
if item.get('template'):
return None
properties = dict()
name, vmid = item['name'], item['vmid']
# get status, config and snapshots if want_facts == True
want_facts = self.get_option('want_facts')
if want_facts:
self._get_vm_status(properties, node, vmid, ittype, name)
self._get_vm_config(properties, node, vmid, ittype, name)
self._get_vm_snapshots(properties, node, vmid, ittype, name)
if ittype == 'lxc':
self._get_lxc_interfaces(properties, node, vmid)
# ensure the host satisfies filters
if not self._can_add_host(name, properties):
return None
# add the host to the inventory
self._add_host(name, properties)
node_type_group = self._group(f'{node}_{ittype}')
self.inventory.add_child(self._group(f"all_{ittype}"), name)
self.inventory.add_child(node_type_group, name)
item_status = item['status']
if item_status == 'running':
if want_facts and ittype == 'qemu' and self.get_option('qemu_extended_statuses'):
# get more details about the status of the qemu VM
item_status = properties.get(self._fact('qmpstatus'), item_status)
self.inventory.add_child(self._group(f'all_{item_status}'), name)
return name
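# Illustrative outcome (names are assumptions for clarity): a running LXC container "web1"
# on node "pve1" with the default group_prefix ends up in the groups proxmox_all_lxc,
# proxmox_pve1_lxc and proxmox_all_running.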
def _populate_pool_groups(self, added_hosts):
'''Generate groups from Proxmox resource pools, ignoring VMs and
containers that were skipped.'''
for pool in self._get_pools():
poolid = pool.get('poolid')
if not poolid:
continue
pool_group = self._group(f"pool_{poolid}")
self.inventory.add_group(pool_group)
for member in self._get_members_per_pool(poolid):
name = member.get('name')
if name and name in added_hosts:
self.inventory.add_child(pool_group, name)
def _populate(self):
# create common groups
default_groups = ['lxc', 'qemu', 'running', 'stopped']
if self.get_option('qemu_extended_statuses'):
default_groups.extend(['prelaunch', 'paused'])
for group in default_groups:
self.inventory.add_group(self._group(f'all_{group}'))
nodes_group = self._group('nodes')
if not self.exclude_nodes:
self.inventory.add_group(nodes_group)
want_proxmox_nodes_ansible_host = self.get_option("want_proxmox_nodes_ansible_host")
# gather VMs on nodes
self._get_auth()
hosts = []
for node in self._get_nodes():
if not node.get('node'):
continue
if not self.exclude_nodes:
self.inventory.add_host(node['node'])
if node['type'] == 'node' and not self.exclude_nodes:
self.inventory.add_child(nodes_group, node['node'])
if node['status'] == 'offline':
continue
# get node IP address
if want_proxmox_nodes_ansible_host and not self.exclude_nodes:
ip = self._get_node_ip(node['node'])
self.inventory.set_variable(node['node'], 'ansible_host', ip)
# Setting composite variables
if not self.exclude_nodes:
variables = self.inventory.get_host(node['node']).get_vars()
self._set_composite_vars(self.get_option('compose'), variables, node['node'], strict=self.strict)
# add LXC/Qemu groups for the node
for ittype in ('lxc', 'qemu'):
node_type_group = self._group(f"{node['node']}_{ittype}")
self.inventory.add_group(node_type_group)
# get LXC containers and Qemu VMs for this node
lxc_objects = zip(itertools.repeat('lxc'), self._get_lxc_per_node(node['node']))
qemu_objects = zip(itertools.repeat('qemu'), self._get_qemu_per_node(node['node']))
for ittype, item in itertools.chain(lxc_objects, qemu_objects):
name = self._handle_item(node['node'], ittype, item)
if name is not None:
hosts.append(name)
# gather VMs in pools
self._populate_pool_groups(hosts)
def parse(self, inventory, loader, path, cache=True):
if not HAS_REQUESTS:
raise AnsibleError('This module requires Python Requests 1.1.0 or higher: '
'https://github.com/psf/requests.')
super(InventoryModule, self).parse(inventory, loader, path)
# read config from file, this sets 'options'
self._read_config_data(path)
# read and template auth options
for o in ('url', 'user', 'password', 'token_id', 'token_secret'):
v = self.get_option(o)
if self.templar.is_template(v):
v = self.templar.template(v, disable_lookups=False)
setattr(self, f'proxmox_{o}', v)
# some more cleanup and validation
self.proxmox_url = self.proxmox_url.rstrip('/')
if self.proxmox_password is None and (self.proxmox_token_id is None or self.proxmox_token_secret is None):
raise AnsibleError('You must specify either a password or both token_id and token_secret.')
if self.get_option('qemu_extended_statuses') and not self.get_option('want_facts'):
raise AnsibleError('You must set want_facts to True if you want to use qemu_extended_statuses.')
# read rest of options
self.exclude_nodes = self.get_option('exclude_nodes')
self.cache_key = self.get_cache_key(path)
self.use_cache = cache and self.get_option('cache')
self.update_cache = not cache and self.get_option('cache')
self.host_filters = self.get_option('filters')
self.group_prefix = self.get_option('group_prefix')
self.facts_prefix = self.get_option('facts_prefix')
self.strict = self.get_option('strict')
# actually populate inventory
self._results = {}
self._populate()
if self.update_cache:
self._cache[self.cache_key] = self._results


@@ -1,242 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020, Tristan Le Guern <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import traceback
from time import sleep
PROXMOXER_IMP_ERR = None
try:
from proxmoxer import ProxmoxAPI
from proxmoxer import __version__ as proxmoxer_version
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
PROXMOXER_IMP_ERR = traceback.format_exc()
from ansible.module_utils.basic import env_fallback, missing_required_lib
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
def proxmox_auth_argument_spec():
return dict(
api_host=dict(type='str',
required=True,
fallback=(env_fallback, ['PROXMOX_HOST'])
),
api_port=dict(type='int',
fallback=(env_fallback, ['PROXMOX_PORT'])
),
api_user=dict(type='str',
required=True,
fallback=(env_fallback, ['PROXMOX_USER'])
),
api_password=dict(type='str',
no_log=True,
fallback=(env_fallback, ['PROXMOX_PASSWORD'])
),
api_token_id=dict(type='str',
no_log=False
),
api_token_secret=dict(type='str',
no_log=True
),
validate_certs=dict(type='bool',
default=False
),
)
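# A minimal sketch of how a module typically consumes this argument spec
# (the module-specific 'node' option and the constraints below are illustrative assumptions):
#
#   from ansible.module_utils.basic import AnsibleModule
#
#   argument_spec = proxmox_auth_argument_spec()
#   argument_spec.update(dict(node=dict(type='str')))
#   module = AnsibleModule(
#       argument_spec=argument_spec,
#       required_one_of=[('api_password', 'api_token_id')],
#       required_together=[('api_token_id', 'api_token_secret')],
#       supports_check_mode=True,
#   )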
def proxmox_to_ansible_bool(value):
'''Convert Proxmox representation of a boolean to be ansible-friendly'''
return True if value == 1 else False
def ansible_to_proxmox_bool(value):
'''Convert Ansible representation of a boolean to be proxmox-friendly'''
if value is None:
return None
if not isinstance(value, bool):
raise ValueError("%s must be of type bool not %s" % (value, type(value)))
return 1 if value else 0
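# Quick reference for the two helpers above:
#   proxmox_to_ansible_bool(1)     -> True
#   proxmox_to_ansible_bool(0)     -> False
#   ansible_to_proxmox_bool(True)  -> 1
#   ansible_to_proxmox_bool(False) -> 0
#   ansible_to_proxmox_bool(None)  -> None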
class ProxmoxAnsible(object):
"""Base class for Proxmox modules"""
TASK_TIMED_OUT = 'timeout expired'
def __init__(self, module):
if not HAS_PROXMOXER:
module.fail_json(msg=missing_required_lib('proxmoxer'), exception=PROXMOXER_IMP_ERR)
self.module = module
self.proxmoxer_version = proxmoxer_version
self.proxmox_api = self._connect()
# Test token validity
try:
self.proxmox_api.version.get()
except Exception as e:
module.fail_json(msg='%s' % e, exception=traceback.format_exc())
def _connect(self):
api_host = self.module.params['api_host']
api_port = self.module.params['api_port']
api_user = self.module.params['api_user']
api_password = self.module.params['api_password']
api_token_id = self.module.params['api_token_id']
api_token_secret = self.module.params['api_token_secret']
validate_certs = self.module.params['validate_certs']
auth_args = {'user': api_user}
if api_port:
auth_args['port'] = api_port
if api_password:
auth_args['password'] = api_password
else:
if self.proxmoxer_version < LooseVersion('1.1.0'):
self.module.fail_json(msg='Using "token_name" and "token_value" requires proxmoxer>=1.1.0')
auth_args['token_name'] = api_token_id
auth_args['token_value'] = api_token_secret
try:
return ProxmoxAPI(api_host, verify_ssl=validate_certs, **auth_args)
except Exception as e:
self.module.fail_json(msg='%s' % e, exception=traceback.format_exc())
def version(self):
try:
apiversion = self.proxmox_api.version.get()
return LooseVersion(apiversion['version'])
except Exception as e:
self.module.fail_json(msg='Unable to retrieve Proxmox VE version: %s' % e)
def get_node(self, node):
try:
nodes = [n for n in self.proxmox_api.nodes.get() if n['node'] == node]
except Exception as e:
self.module.fail_json(msg='Unable to retrieve Proxmox VE node: %s' % e)
return nodes[0] if nodes else None
def get_nextvmid(self):
try:
return self.proxmox_api.cluster.nextid.get()
except Exception as e:
self.module.fail_json(msg='Unable to retrieve next free vmid: %s' % e)
def get_vmid(self, name, ignore_missing=False, choose_first_if_multiple=False):
try:
vms = [vm['vmid'] for vm in self.proxmox_api.cluster.resources.get(type='vm') if vm.get('name') == name]
except Exception as e:
self.module.fail_json(msg='Unable to retrieve list of VMs filtered by name %s: %s' % (name, e))
if not vms:
if ignore_missing:
return None
self.module.fail_json(msg='No VM with name %s found' % name)
elif len(vms) > 1 and not choose_first_if_multiple:
self.module.fail_json(msg='Multiple VMs with name %s found, provide vmid instead' % name)
return vms[0]
def get_vm(self, vmid, ignore_missing=False):
try:
vms = [vm for vm in self.proxmox_api.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid)]
except Exception as e:
self.module.fail_json(msg='Unable to retrieve list of VMs filtered by vmid %s: %s' % (vmid, e))
if vms:
return vms[0]
else:
if ignore_missing:
return None
self.module.fail_json(msg='VM with vmid %s does not exist in cluster' % vmid)
def api_task_ok(self, node, taskid):
try:
status = self.proxmox_api.nodes(node).tasks(taskid).status.get()
return status['status'] == 'stopped' and status['exitstatus'] == 'OK'
except Exception as e:
self.module.fail_json(msg='Unable to retrieve API task ID from node %s: %s' % (node, e))
def api_task_failed(self, node, taskid):
""" Explicitly check if the task stops but exits with a failed status
"""
try:
status = self.proxmox_api.nodes(node).tasks(taskid).status.get()
return status['status'] == 'stopped' and status['exitstatus'] != 'OK'
except Exception as e:
self.module.fail_json(msg='Unable to retrieve API task ID from node %s: %s' % (node, e))
def api_task_complete(self, node_name, task_id, timeout):
"""Wait until the task stops or times out.
:param node_name: Proxmox node name where the task is running.
:param task_id: ID of the running task.
:param timeout: Timeout in seconds to wait for the task to complete.
:return: Task completion status (True/False) and ``exitstatus`` message when status=False.
"""
status = {}
while timeout:
try:
status = self.proxmox_api.nodes(node_name).tasks(task_id).status.get()
except Exception as e:
self.module.fail_json(msg='Unable to retrieve API task ID from node %s: %s' % (node_name, e))
if status['status'] == 'stopped':
if status['exitstatus'] == 'OK':
return True, None
else:
return False, status['exitstatus']
else:
timeout -= 1
if timeout <= 0:
return False, ProxmoxAnsible.TASK_TIMED_OUT
sleep(1)
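# Typical call pattern from a module using the method above (a sketch; variable names are illustrative):
#   ok, exitstatus = self.api_task_complete(node_name, upid, timeout=60)
#   if not ok:
#       self.module.fail_json(msg="Task did not finish: %s" % exitstatus)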
def get_pool(self, poolid):
"""Retrieve pool information
:param poolid: str - name of the pool
:return: dict - pool information
"""
try:
return self.proxmox_api.pools(poolid).get()
except Exception as e:
self.module.fail_json(msg="Unable to retrieve pool %s information: %s" % (poolid, e))
def get_storages(self, type):
"""Retrieve storages information
:param type: str, optional - type of storages
:return: list of dicts - array of storages
"""
try:
return self.proxmox_api.storage.get(type=type)
except Exception as e:
self.module.fail_json(msg="Unable to retrieve storages information with type %s: %s" % (type, e))
def get_storage_content(self, node, storage, content=None, vmid=None):
try:
return (
self.proxmox_api.nodes(node)
.storage(storage)
.content()
.get(content=content, vmid=vmid)
)
except Exception as e:
self.module.fail_json(
msg="Unable to list content on %s, %s for %s and %s: %s"
% (node, storage, content, vmid, e)
)

File diff suppressed because it is too large.


@@ -1,570 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024, IamLunchbox <r.grieger@hotmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_backup
author: "Raphael Grieger (@IamLunchbox) <r.grieger@hotmail.com>"
short_description: Start a VM backup in Proxmox VE cluster
version_added: 10.1.0
description:
- Allows you to create backups of KVM and LXC guests in Proxmox VE cluster.
- Offers the GUI functionality of creating a single backup as well as using the run-now functionality from the cluster backup
schedule.
- The minimum required privileges to use this module are C(VM.Backup) and C(Datastore.AllocateSpace) for the respective
VMs and storage.
- Most options are optional; if unspecified, the cluster's default values are used.
- Note that this module B(is not idempotent). It always starts a new backup (when not in check mode).
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
backup_mode:
description:
- The mode in which Proxmox performs backups. The default is to create a runtime snapshot including memory.
- Check U(https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_backup_modes) for an explanation of the differences.
type: str
choices: ["snapshot", "suspend", "stop"]
default: snapshot
bandwidth:
description:
- Limit the I/O bandwidth (in KiB/s) to write backup. V(0) is unlimited.
type: int
change_detection_mode:
description:
- Set the change detection mode (available from Proxmox VE 8.3).
- It is only used when backing up containers; Proxmox silently ignores this option when applied to KVM guests.
type: str
choices: ["legacy", "data", "metadata"]
compress:
description:
- Enable additional compression of the backup archive.
- V(0) will use the Proxmox recommended value, depending on your storage target.
type: str
choices: ["0", "1", "gzip", "lzo", "zstd"]
compression_threads:
description:
- The number of threads zstd will use to compress the backup.
- V(0) uses 50% of the available cores; any value larger than V(0) uses exactly that many threads.
- This option is ignored if you specify O(compress=gzip) or O(compress=lzo).
type: int
description:
description:
- Specify the description of the backup.
- Needs to be a single line, newline and backslash need to be escaped as V(\\n) and V(\\\\) respectively.
- If you need variable interpolation, you can set the content as usual through Ansible Jinja templating and/or let Proxmox
substitute templates.
- Proxmox currently supports V({{cluster}}), V({{guestname}}), V({{node}}), and V({{vmid}}) as templating variables.
Since these use Jinja delimiters, you need to set these values as raw Jinja.
default: "{{guestname}}"
type: str
fleecing:
description:
- Enable backup fleecing. Works only for virtual machines and their disks.
- Must be entered as a string, containing key-value pairs in a list.
type: str
mode:
description:
- Specifies the mode used to select backup targets.
choices: ["include", "all", "pool"]
required: true
type: str
node:
description:
- Only execute the backup job for the given node.
- This option is usually used if O(mode=all).
- If you specify a node ID and your vmids or pool do not reside there, they will not be backed up!
type: str
notification_mode:
description:
- Determine which notification system to use.
type: str
choices: ["auto", "legacy-sendmail", "notification-system"]
default: auto
performance_tweaks:
description:
- Enable other performance-related settings.
- Must be entered as a string, containing comma separated key-value pairs.
- 'For example: V(max-workers=2,pbs-entries-max=2).'
type: str
pool:
description:
- Specify a pool name to limit backups to guests in the given pool.
- Required when O(mode=pool).
- Also required when your user only has the VM.Backup permission for this single pool.
type: str
protected:
description:
- Marks backups as protected.
- '"Might fail, when the PBS backend has verify enabled due to this bug: U(https://bugzilla.proxmox.com/show_bug.cgi?id=4289)".'
type: bool
retention:
description:
- Use custom retention options instead of those from the default cluster configuration (which is usually V("keep-all=1")).
- Always requires Datastore.Allocate permission at the storage endpoint.
- Specifying a retention time other than V(keep-all=1) might trigger pruning on the datastore if existing backups
fall outside your specified timeframe.
- Deleting requires C(Datastore.Modify) or C(Datastore.Prune) permissions on the backup storage.
type: str
storage:
description:
- Store the backup archive on this storage.
type: str
required: true
vmids:
description:
- The instance IDs to be backed up.
- Only valid if O(mode=include).
type: list
elements: int
wait:
description:
- Wait for the backup to be finished.
- Fails if the job does not finish successfully within the given timeout.
type: bool
default: false
wait_timeout:
description:
- Seconds to wait for the backup to be finished.
- Only evaluated if O(wait=true).
type: int
default: 10
requirements: ["proxmoxer", "requests"]
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Backup all VMs in the Proxmox cluster to storage mypbs
community.general.proxmox_backup:
api_user: root@pam
api_password: secret
api_host: node1
storage: mypbs
mode: all
- name: Backup VMID 100 by stopping it and set an individual retention
community.general.proxmox_backup:
api_user: root@pam
api_password: secret
api_host: node1
backup_mode: stop
mode: include
retention: keep-daily=5, keep-last=14, keep-monthly=4, keep-weekly=4, keep-yearly=0
storage: mypbs
vmids: [100]
- name: Backup all VMs on node node2 to storage mypbs and wait for the task to finish
community.general.proxmox_backup:
api_user: test@pve
api_password: 1q2w3e
api_host: node2
storage: mypbs
mode: all
node: node2
wait: true
wait_timeout: 30
- name: Use all the options
community.general.proxmox_backup:
api_user: root@pam
api_password: secret
api_host: node1
bandwidth: 1000
backup_mode: suspend
compress: zstd
compression_threads: 0
description: A single backup for {% raw %}{{ guestname }}{% endraw %}
mode: include
notification_mode: notification-system
protected: true
retention: keep-monthly=1, keep-weekly=1
storage: mypbs
vmids:
- 100
- 101
"""
RETURN = r"""
backups:
description: List of nodes and their task IDs.
returned: on success
type: list
elements: dict
contains:
node:
description: Node ID.
returned: on success
type: str
status:
description: Last known task status. Will be V(unknown) if O(wait=false).
returned: on success
type: str
choices: ["unknown", "success", "failed"]
upid:
description: >-
Proxmox cluster UPID, which is needed to look up task info. Is V(OK) when a cluster node did not create a task after
being called, for example due to no matching targets.
returned: on success
type: str
"""
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.proxmox import ProxmoxAnsible, proxmox_auth_argument_spec
def has_permission(permission_tree, permission, search_scopes, default=0, expected=1):
return any(permission_tree.get(scope, {}).get(permission, default) == expected for scope in search_scopes)
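# Illustrative evaluation, assuming the permission tree shape used throughout this module
# ({"<path>": {"<privilege>": 0 or 1}}; the paths and values below are examples):
#   tree = {"/": {"Datastore.AllocateSpace": 1}, "/vms/100": {"VM.Backup": 1}}
#   has_permission(tree, "VM.Backup", ["/", "/vms"])  -> False
#   has_permission(tree, "VM.Backup", ["/vms/100"])   -> True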
class ProxmoxBackupAnsible(ProxmoxAnsible):
def _get_permissions(self):
return self.proxmox_api.access.permissions.get()
def _get_resources(self, resource_type=None):
return self.proxmox_api.cluster.resources.get(type=resource_type)
def _get_tasklog(self, node, upid):
return self.proxmox_api.nodes(node).tasks(upid).log.get()
def _get_taskok(self, node, upid):
return self.proxmox_api.nodes(node).tasks(upid).status.get()
def _post_vzdump(self, node, request_body):
return self.proxmox_api.nodes(node).vzdump.post(**request_body)
def request_backup(
self,
request_body,
node_endpoints):
task_ids = []
for node in node_endpoints:
upid = self._post_vzdump(node, request_body)
if upid != "OK":
tasklog = ", ".join(logentry["t"] for logentry in self._get_tasklog(node, upid))
else:
tasklog = ""
task_ids.extend([{"node": node, "upid": upid, "status": "unknown", "log": "%s" % tasklog}])
return task_ids
def check_relevant_nodes(self, node):
nodes = [
item["node"]
for item in self._get_resources("node")
if item["status"] == "online"
]
if node and node not in nodes:
self.module.fail_json(msg="Node %s was specified, but does not exist on the cluster" % node)
elif node:
return [node]
return nodes
def check_storage_permissions(
self,
permissions,
storage,
bandwidth,
performance,
retention):
# Check for Datastore.AllocateSpace in the permission tree
if not has_permission(permissions, "Datastore.AllocateSpace", search_scopes=["/", "/storage/", "/storage/" + storage]):
self.module.fail_json(changed=False, msg="Insufficient permission: Datastore.AllocateSpace is missing")
if (bandwidth or performance) and has_permission(permissions, "Sys.Modify", search_scopes=["/"], expected=0):
self.module.fail_json(changed=False, msg="Insufficient permission: Performance_tweaks and bandwidth require 'Sys.Modify' permission for '/'")
if retention:
if not has_permission(permissions, "Datastore.Allocate", search_scopes=["/", "/storage", "/storage/" + storage]):
self.module.fail_json(changed=False, msg="Insufficient permissions: Custom retention was requested, but Datastore.Allocate is missing")
def check_vmid_backup_permission(self, permissions, vmids, pool):
sufficient_permissions = has_permission(permissions, "VM.Backup", search_scopes=["/", "/vms"])
if pool and not sufficient_permissions:
sufficient_permissions = has_permission(permissions, "VM.Backup", search_scopes=["/pool/" + pool, "/pool/" + pool + "/vms"])
if not sufficient_permissions:
# Since VM.Backup can be granted per vmid, iterate through all of them
# and check if the permission is set
failed_vmids = []
for vm in vmids:
vm_path = "/vms/" + str(vm)
if has_permission(permissions, "VM.Backup", search_scopes=[vm_path], default=1, expected=0):
failed_vmids.append(str(vm))
if failed_vmids:
self.module.fail_json(
changed=False, msg="Insufficient permissions: "
"You dont have the VM.Backup permission for VMID %s" %
", ".join(failed_vmids))
sufficient_permissions = True
# Finally, when no check succeeded, fail
if not sufficient_permissions:
self.module.fail_json(changed=False, msg="Insufficient permissions: You do not have the VM.Backup permission")
def check_general_backup_permission(self, permissions, pool):
if not has_permission(permissions, "VM.Backup", search_scopes=["/", "/vms"] + (["/pool/" + pool] if pool else [])):
self.module.fail_json(changed=False, msg="Insufficient permissions: You dont have the VM.Backup permission")
def check_if_storage_exists(self, storage, node):
storages = self.get_storages(type=None)
# Loop through all cluster storages and get all matching storages
validated_storagepath = [storageentry for storageentry in storages if storageentry["storage"] == storage]
if not validated_storagepath:
self.module.fail_json(
changed=False,
msg="Storage %s does not exist in the cluster" %
storage)
def check_vmids(self, vmids):
cluster_vmids = [vm["vmid"] for vm in self._get_resources("vm")]
if not cluster_vmids:
self.module.warn(
"VM.Audit permission is missing or there are no VMs. This task might fail if one VMID does not exist")
return
vmids_not_found = [str(vm) for vm in vmids if vm not in cluster_vmids]
if vmids_not_found:
self.module.warn(
"VMIDs %s not found. This task will fail if one VMID does not exist" %
", ".join(vmids_not_found))
def wait_for_timeout(self, timeout, raw_tasks):
# filter out all entries which did not get a task id from the cluster
tasks = []
ok_tasks = []
for node in raw_tasks:
if node["upid"] != "OK":
tasks.append(node)
else:
ok_tasks.append(node)
start_time = time.time()
# iterate through the task ids and check their values
while True:
for node in tasks:
if node["status"] == "unknown":
try:
# proxmox.api_task_ok does not suffice, since it only
# is true for `stopped` and `OK`
status = self._get_taskok(node["node"], node["upid"])
if status["status"] == "stopped" and status["exitstatus"] == "OK":
node["status"] = "success"
if status["status"] == "stopped" and status["exitstatus"] == "job errors":
node["status"] = "failed"
except Exception as e:
self.module.fail_json(msg="Unable to retrieve API task ID from node %s: %s" % (node["node"], e))
if len([item for item in tasks if item["status"] != "unknown"]) == len(tasks):
break
if time.time() > start_time + timeout:
timeouted_nodes = [
node["node"]
for node in tasks
if node["status"] == "unknown"
]
failed_nodes = [node["node"] for node in tasks if node["status"] == "failed"]
if failed_nodes:
self.module.fail_json(
msg="Reached timeout while waiting for backup task. "
"Nodes, who reached the timeout: %s. "
"Nodes, which failed: %s" %
(", ".join(timeouted_nodes), ", ".join(failed_nodes)))
self.module.fail_json(
msg="Reached timeout while waiting for creating VM snapshot. "
"Nodes who reached the timeout: %s" %
", ".join(timeouted_nodes))
time.sleep(1)
error_logs = []
for node in tasks:
if node["status"] == "failed":
tasklog = ", ".join([logentry["t"] for logentry in self._get_tasklog(node["node"], node["upid"])])
error_logs.append("%s: %s" % (node, tasklog))
if error_logs:
self.module.fail_json(
msg="An error occured creating the backups. "
"These are the last log lines from the failed nodes: %s" %
", ".join(error_logs))
for node in tasks:
tasklog = ", ".join([logentry["t"] for logentry in self._get_tasklog(node["node"], node["upid"])])
node["log"] = tasklog
# Finally, reattach the ok tasks to show that all nodes were contacted
tasks.extend(ok_tasks)
return tasks
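# Shape of the list returned above (values are illustrative assumptions):
#   [
#       {"node": "node1", "upid": "UPID:node1:...", "status": "success", "log": "..."},
#       {"node": "node2", "upid": "OK", "status": "unknown", "log": ""},
#   ]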
def permission_check(
self,
storage,
mode,
node,
bandwidth,
performance_tweaks,
retention,
pool,
vmids):
permissions = self._get_permissions()
self.check_if_storage_exists(storage, node)
self.check_storage_permissions(
permissions, storage, bandwidth, performance_tweaks, retention)
if mode == "include":
self.check_vmid_backup_permission(permissions, vmids, pool)
else:
self.check_general_backup_permission(permissions, pool)
def prepare_request_parameters(self, module_arguments):
# ensure only valid post parameters are passed to proxmox
# list of dict items to replace with (new_val, old_val)
post_params = [("bwlimit", "bandwidth"),
("compress", "compress"),
("fleecing", "fleecing"),
("mode", "backup_mode"),
("notes-template", "description"),
("notification-mode", "notification_mode"),
("pbs-change-detection-mode", "change_detection_mode"),
("performance", "performance_tweaks"),
("pool", "pool"),
("protected", "protected"),
("prune-backups", "retention"),
("storage", "storage"),
("zstd", "compression_threads"),
("vmid", "vmids")]
request_body = {}
for new, old in post_params:
if module_arguments.get(old):
request_body.update({new: module_arguments[old]})
# Set mode specific values
if module_arguments["mode"] == "include":
request_body.pop("pool", None)
request_body["all"] = 0
elif module_arguments["mode"] == "all":
request_body.pop("vmid", None)
request_body.pop("pool", None)
request_body["all"] = 1
elif module_arguments["mode"] == "pool":
request_body.pop("vmid", None)
request_body["all"] = 0
# Create a comma-separated list from vmids, as the API expects
if request_body.get("vmid"):
request_body.update({"vmid": ",".join(str(vmid) for vmid in request_body["vmid"])})
# remove whitespaces from option strings
for key in ("prune-backups", "performance"):
if request_body.get(key):
request_body[key] = request_body[key].replace(" ", "")
# convert booleans to 0/1
for key in ("protected",):
if request_body.get(key):
request_body[key] = 1
return request_body
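# Illustrative mapping for the method above (argument values are assumptions;
# options left at their defaults are omitted for brevity):
#   mode=include, vmids=[100, 101], storage="mypbs", protected=True
#   -> {"storage": "mypbs", "vmid": "100,101", "protected": 1, "all": 0}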
def backup_create(
self,
module_arguments,
check_mode,
node_endpoints):
request_body = self.prepare_request_parameters(module_arguments)
# stop here, before anything gets changed
if check_mode:
return []
task_ids = self.request_backup(request_body, node_endpoints)
updated_task_ids = []
if module_arguments["wait"]:
updated_task_ids = self.wait_for_timeout(
module_arguments["wait_timeout"], task_ids)
return updated_task_ids if updated_task_ids else task_ids
def main():
module_args = proxmox_auth_argument_spec()
backup_args = {
"backup_mode": {"type": "str", "default": "snapshot", "choices": ["snapshot", "suspend", "stop"]},
"bandwidth": {"type": "int"},
"change_detection_mode": {"type": "str", "choices": ["legacy", "data", "metadata"]},
"compress": {"type": "str", "choices": ["0", "1", "gzip", "lzo", "zstd"]},
"compression_threads": {"type": "int"},
"description": {"type": "str", "default": "{{guestname}}"},
"fleecing": {"type": "str"},
"mode": {"type": "str", "required": True, "choices": ["include", "all", "pool"]},
"node": {"type": "str"},
"notification_mode": {"type": "str", "default": "auto", "choices": ["auto", "legacy-sendmail", "notification-system"]},
"performance_tweaks": {"type": "str"},
"pool": {"type": "str"},
"protected": {"type": "bool"},
"retention": {"type": "str"},
"storage": {"type": "str", "required": True},
"vmids": {"type": "list", "elements": "int"},
"wait": {"type": "bool", "default": False},
"wait_timeout": {"type": "int", "default": 10}}
module_args.update(backup_args)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True,
required_if=[
("mode", "include", ("vmids",), True),
("mode", "pool", ("pool",))
]
)
proxmox = ProxmoxBackupAnsible(module)
bandwidth = module.params["bandwidth"]
mode = module.params["mode"]
node = module.params["node"]
performance_tweaks = module.params["performance_tweaks"]
pool = module.params["pool"]
retention = module.params["retention"]
storage = module.params["storage"]
vmids = module.params["vmids"]
proxmox.permission_check(
storage,
mode,
node,
bandwidth,
performance_tweaks,
retention,
pool,
vmids)
if module.params["mode"] == "include":
proxmox.check_vmids(module.params["vmids"])
node_endpoints = proxmox.check_relevant_nodes(module.params["node"])
try:
result = proxmox.backup_create(module.params, module.check_mode, node_endpoints)
except Exception as e:
module.fail_json(msg="Creating backups failed with exception: %s" % to_native(e))
if module.check_mode:
module.exit_json(backups=result, changed=True, msg="Backups would be created")
elif len([entry for entry in result if entry["upid"] == "OK"]) == len(result):
module.exit_json(backups=result, changed=False, msg="Backup request sent to proxmox, no tasks created")
elif module.params["wait"]:
module.exit_json(backups=result, changed=True, msg="Backups succeeded")
else:
module.exit_json(backups=result, changed=True,
msg="Backup tasks created")
if __name__ == "__main__":
main()


@@ -1,244 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Marzieh Raoufnezhad <raoufnezhad at gmail.com>
# Copyright (c) 2024 Maryam Mayabi <mayabi.ahm at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: proxmox_backup_info
short_description: Retrieve information on Proxmox scheduled backups
version_added: 10.3.0
description:
- Retrieve information such as backup times, VM name, VM ID, mode, backup type, and backup schedule using the Proxmox Server API.
author:
- "Marzieh Raoufnezhad (@raoufnezhad) <raoufnezhad@gmail.com>"
- "Maryam Mayabi (@mmayabi) <mayabi.ahm@gmail.com>"
options:
vm_name:
description:
- The name of the Proxmox VM.
- If defined, the returned list will contain backup jobs that have been parsed and filtered based on O(vm_name) value.
- Mutually exclusive with O(vm_id) and O(backup_jobs).
type: str
vm_id:
description:
- The ID of the Proxmox VM.
- If defined, the returned list will contain backup jobs that have been parsed and filtered based on O(vm_id) value.
- Mutually exclusive with O(vm_name) and O(backup_jobs).
type: str
backup_jobs:
description:
- If V(true), the module will return all backup jobs information.
- If V(false), the module will parse all backup jobs based on VM IDs and return a list of VMs' backup information.
- Mutually exclusive with O(vm_id) and O(vm_name).
default: false
type: bool
extends_documentation_fragment:
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
- community.general.proxmox.actiongroup_proxmox
"""
EXAMPLES = """
- name: Print all backup information by VM ID and VM name
community.general.proxmox_backup_info:
api_user: 'myUser@pam'
api_password: '*******'
api_host: '192.168.20.20'
- name: Print Proxmox backup information for a specific VM based on its name
community.general.proxmox_backup_info:
api_user: 'myUser@pam'
api_password: '*******'
api_host: '192.168.20.20'
vm_name: 'mailsrv'
- name: Print Proxmox backup information for a specific VM based on its VM ID
community.general.proxmox_backup_info:
api_user: 'myUser@pam'
api_password: '*******'
api_host: '192.168.20.20'
vm_id: '150'
- name: Print Proxmox all backup job information
community.general.proxmox_backup_info:
api_user: 'myUser@pam'
api_password: '*******'
api_host: '192.168.20.20'
backup_jobs: true
"""
RETURN = """
---
backup_info:
description: The return value provides backup job information based on VM ID or VM name, or total backup job information.
returned: on success, but can be empty
type: list
elements: dict
contains:
bktype:
description: The type of the backup.
returned: on success
type: str
sample: vzdump
enabled:
description: V(1) if backup is enabled else V(0).
returned: on success
type: int
sample: 1
id:
description: The backup job ID.
returned: on success
type: str
sample: backup-83831498-c631
mode:
description: The backup job mode such as snapshot.
returned: on success
type: str
sample: snapshot
next-run:
description: The next backup time.
returned: on success
type: str
sample: "2024-12-28 11:30:00"
schedule:
description: The backup job schedule.
returned: on success
type: str
sample: "sat 15:00"
storage:
description: The backup storage location.
returned: on success
type: str
sample: local
vm_name:
description: The VM name.
returned: on success
type: str
sample: test01
vmid:
description: The VM ID.
returned: on success
type: str
sample: "100"
"""
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible, HAS_PROXMOXER, PROXMOXER_IMP_ERR)
class ProxmoxBackupInfoAnsible(ProxmoxAnsible):
# Get all backup information
def get_jobs_list(self):
try:
backupJobs = self.proxmox_api.cluster.backup.get()
except Exception as e:
self.module.fail_json(msg="Getting backup jobs failed: %s" % e)
return backupJobs
# Get VM information
def get_vms_list(self):
try:
vms = self.proxmox_api.cluster.resources.get(type='vm')
except Exception as e:
self.module.fail_json(msg="Getting VMs info from cluster failed: %s" % e)
return vms
# Get all backup information by VM ID and VM name
def vms_backup_info(self):
backupList = self.get_jobs_list()
vmInfo = self.get_vms_list()
bkInfo = []
for backupItem in backupList:
nextrun = datetime.fromtimestamp(backupItem['next-run'])
vmids = backupItem['vmid'].split(',')
for vmid in vmids:
for vm in vmInfo:
if vm['vmid'] == int(vmid):
vmName = vm['name']
break
bkInfoData = {'id': backupItem['id'],
'schedule': backupItem['schedule'],
'storage': backupItem['storage'],
'mode': backupItem['mode'],
'next-run': nextrun.strftime("%Y-%m-%d %H:%M:%S"),
'enabled': backupItem['enabled'],
'bktype': backupItem['type'],
'vmid': vmid,
'vm_name': vmName}
bkInfo.append(bkInfoData)
return bkInfo
# Get proxmox backup information for a specific VM based on its VM ID or VM name
def specific_vmbackup_info(self, vm_name_id):
fullBackupInfo = self.vms_backup_info()
vmBackupJobs = []
for vm in fullBackupInfo:
if (vm["vm_name"] == vm_name_id or vm["vmid"] == vm_name_id):
vmBackupJobs.append(vm)
return vmBackupJobs
def main():
# Define module args
args = proxmox_auth_argument_spec()
backup_info_args = dict(
vm_id=dict(type='str'),
vm_name=dict(type='str'),
backup_jobs=dict(type='bool', default=False)
)
args.update(backup_info_args)
module = AnsibleModule(
argument_spec=args,
mutually_exclusive=[('backup_jobs', 'vm_id', 'vm_name')],
supports_check_mode=True
)
# Define (init) result value
result = dict(
changed=False
)
# Check if proxmoxer exists
if not HAS_PROXMOXER:
module.fail_json(msg=missing_required_lib('proxmoxer'), exception=PROXMOXER_IMP_ERR)
# Connect to Proxmox to retrieve backup data
proxmox = ProxmoxBackupInfoAnsible(module)
vm_id = module.params['vm_id']
vm_name = module.params['vm_name']
backup_jobs = module.params['backup_jobs']
# Update result value based on what was requested (module args)
if backup_jobs:
result['backup_info'] = proxmox.get_jobs_list()
elif vm_id:
result['backup_info'] = proxmox.specific_vmbackup_info(vm_id)
elif vm_name:
result['backup_info'] = proxmox.specific_vmbackup_info(vm_name)
else:
result['backup_info'] = proxmox.vms_backup_info()
# Return result value
module.exit_json(**result)
if __name__ == '__main__':
main()


@@ -1,877 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2022, Castor Sky (@castorsky) <csky57@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_disk
short_description: Management of a disk of a Qemu(KVM) VM in a Proxmox VE cluster
version_added: 5.7.0
description:
- Allows you to perform some supported operations on a disk in Qemu(KVM) Virtual Machines in a Proxmox VE cluster.
author: "Castor Sky (@castorsky) <csky57@gmail.com>"
attributes:
check_mode:
support: none
diff_mode:
support: none
action_group:
version_added: 9.0.0
options:
name:
description:
- The unique name of the VM.
- You can specify either O(name) or O(vmid) or both of them.
type: str
vmid:
description:
- The unique ID of the VM.
- You can specify either O(vmid) or O(name) or both of them.
type: int
disk:
description:
- The disk key (V(unused[n]), V(ide[n]), V(sata[n]), V(scsi[n]) or V(virtio[n])) you want to operate on.
- Disk buses (IDE, SATA, and so on) have fixed ranges of V(n) that are accepted by the Proxmox API.
- 'For IDE: 0-3; for SCSI: 0-30; for SATA: 0-5; for VirtIO: 0-15; for Unused: 0-255.'
type: str
required: true
state:
description:
- Indicates desired state of the disk.
- O(state=present) can be used to create or replace a disk, or to update options in an existing disk. By default it creates a missing disk
or updates options in an existing one. See the O(create) parameter description to control the behavior of this
option.
- Some option updates (like O(cache)) are not applied instantly and require a VM restart.
- Use O(state=detached) to detach an existing disk from the VM without removing it entirely. When O(state=detached) and the disk
is V(unused[n]), it will be left in the same state (not removed).
- O(state=moved) may be used to change backing storage for the disk in bounds of the same VM or to send the disk to
another VM (using the same backing storage).
- O(state=resized) is intended to change the disk size. As of Proxmox 7.2 you can only increase the disk size because shrinking
disks is not supported by the PVE API and has to be done manually.
- To entirely remove the disk from backing storage use O(state=absent).
type: str
choices: ['present', 'resized', 'detached', 'moved', 'absent']
default: present
create:
description:
- With the O(create) flag you can control the behavior of O(state=present).
- When O(create=disabled), it will not create a new disk (if it does not exist) but will update options in an existing disk.
- When O(create=regular), it will either create a new disk (if it does not exist) or update options in an existing disk.
- When O(create=forced), it will always create a new disk (if the disk exists, it will be detached and left unused).
type: str
choices: ['disabled', 'regular', 'forced']
default: regular
storage:
description:
- The drive's backing storage.
- Used only when O(state) is V(present).
type: str
size:
description:
- Desired volume size in GB to allocate when O(state=present) (specify O(size) without suffix).
- New (or additional) size of volume when O(state=resized). With the V(+) sign the value is added to the actual size
of the volume and without it, the value is taken as an absolute one.
type: str
bwlimit:
description:
- Override I/O bandwidth limit (in KB/s).
- Used only when O(state=moved).
type: int
delete_moved:
description:
- Delete the original disk after successful copy.
- By default the original disk is kept as unused disk.
- Used only when O(state=moved).
type: bool
target_disk:
description:
- The config key the disk will be moved to on the target VM (for example, V(ide0) or V(scsi1)).
- Default is the source disk key.
- Used only when O(state=moved).
type: str
target_storage:
description:
- Move the disk to this storage when O(state=moved).
- You can move between storages only within the scope of one VM.
- Mutually exclusive with O(target_vmid).
- Consider increasing O(timeout) in case of large disk images or slow storage backend.
type: str
target_vmid:
description:
- The (unique) ID of the VM where the disk will be placed when O(state=moved).
- You can move a disk between VMs only when the same storage is used.
- Mutually exclusive with O(target_storage).
type: int
timeout:
description:
- Timeout in seconds to wait for slow operations such as importing disk or moving disk between storages.
- Used only when O(state) is V(present) or V(moved).
type: int
default: 600
aio:
description:
- AIO type to use.
type: str
choices: ['native', 'threads', 'io_uring']
backup:
description:
- Whether the drive should be included when making backups.
type: bool
bps_max_length:
description:
- Maximum length of total r/w I/O bursts in seconds.
type: int
bps_rd_max_length:
description:
- Maximum length of read I/O bursts in seconds.
type: int
bps_wr_max_length:
description:
- Maximum length of write I/O bursts in seconds.
type: int
cache:
description:
- The drive's cache mode.
type: str
choices: ['none', 'writethrough', 'writeback', 'unsafe', 'directsync']
cyls:
description:
- Force the drive's physical geometry to have a specific cylinder count.
type: int
detect_zeroes:
description:
- Control whether to detect and try to optimize writes of zeroes.
type: bool
discard:
description:
- Control whether to pass discard/trim requests to the underlying storage.
type: str
choices: ['ignore', 'on']
format:
description:
- The drive's backing file's data format.
type: str
choices: ['raw', 'cow', 'qcow', 'qed', 'qcow2', 'vmdk', 'cloop']
heads:
description:
- Force the drive's physical geometry to have a specific head count.
type: int
import_from:
description:
- Import volume from this existing one.
- Volume string format.
- V(<STORAGE>:<VMID>/<FULL_NAME>) or V(<ABSOLUTE_PATH>/<FULL_NAME>).
- Attention! Only root can use absolute paths.
- This parameter is mutually exclusive with O(size).
- Increase O(timeout) parameter when importing large disk images or using slow storage.
type: str
iops:
description:
- Maximum total r/w I/O in operations per second.
- You can specify either total limit or per operation (mutually exclusive with O(iops_rd) and O(iops_wr)).
type: int
iops_max:
description:
- Maximum unthrottled total r/w I/O pool in operations per second.
type: int
iops_max_length:
description:
- Maximum length of total r/w I/O bursts in seconds.
type: int
iops_rd:
description:
- Maximum read I/O in operations per second.
- You can specify either read or total limit (mutually exclusive with O(iops)).
type: int
iops_rd_max:
description:
- Maximum unthrottled read I/O pool in operations per second.
type: int
iops_rd_max_length:
description:
- Maximum length of read I/O bursts in seconds.
type: int
iops_wr:
description:
- Maximum write I/O in operations per second.
- You can specify either write or total limit (mutually exclusive with O(iops)).
type: int
iops_wr_max:
description:
- Maximum unthrottled write I/O pool in operations per second.
type: int
iops_wr_max_length:
description:
- Maximum length of write I/O bursts in seconds.
type: int
iothread:
description:
- Whether to use iothreads for this drive (only for SCSI and VirtIO).
type: bool
mbps:
description:
- Maximum total r/w speed in megabytes per second.
- Can be fractional, but use with caution: fractional values less than 1 are not officially supported.
- You can specify either total limit or per operation (mutually exclusive with O(mbps_rd) and O(mbps_wr)).
type: float
mbps_max:
description:
- Maximum unthrottled total r/w pool in megabytes per second.
type: float
mbps_rd:
description:
- Maximum read speed in megabytes per second.
- You can specify either read or total limit (mutually exclusive with O(mbps)).
type: float
mbps_rd_max:
description:
- Maximum unthrottled read pool in megabytes per second.
type: float
mbps_wr:
description:
- Maximum write speed in megabytes per second.
- You can specify either write or total limit (mutually exclusive with O(mbps)).
type: float
mbps_wr_max:
description:
- Maximum unthrottled write pool in megabytes per second.
type: float
media:
description:
- The drive's media type.
type: str
choices: ['cdrom', 'disk']
iso_image:
description:
- The ISO image to be mounted on the CD-ROM specified in O(disk).
- O(media=cdrom) needs to be specified for this option to work.
- Use V(<STORAGE>:iso/<ISO_NAME>) to mount ISO.
- Use V(cdrom) to access the physical CD/DVD drive.
- Use V(none) to unmount the image from an existing CD-ROM or to create an empty CD-ROM drive.
type: str
version_added: 8.1.0
queues:
description:
- Number of queues (SCSI only).
type: int
replicate:
description:
- Whether the drive should be considered for replication jobs.
type: bool
rerror:
description:
- Read error action.
type: str
choices: ['ignore', 'report', 'stop']
ro:
description:
- Whether the drive is read-only.
type: bool
scsiblock:
description:
- Whether to use scsi-block for full passthrough of host block device.
- Can lead to I/O errors in combination with low memory or high memory fragmentation on host.
type: bool
secs:
description:
- Force the drive's physical geometry to have a specific sector count.
type: int
serial:
description:
- The drive's reported serial number, url-encoded, up to 20 bytes long.
type: str
shared:
description:
- Mark this locally-managed volume as available on all nodes.
- This option does not share the volume automatically; it assumes it is already shared!
type: bool
snapshot:
description:
- Control QEMU's snapshot mode feature.
- If activated, changes made to the disk are temporary and are discarded when the VM is shut down.
type: bool
ssd:
description:
- Whether to expose this drive as an SSD, rather than a rotational hard disk.
type: bool
trans:
description:
- Force the disk geometry BIOS translation mode.
type: str
choices: ['auto', 'lba', 'none']
werror:
description:
- Write error action.
type: str
choices: ['enospc', 'ignore', 'report', 'stop']
wwn:
description:
- The drive's worldwide name, encoded as a 16-byte hex string, prefixed by V(0x).
type: str
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Create new disk in VM (do not rewrite in case it exists already)
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
name: vm-name
disk: scsi3
backup: true
cache: none
storage: local-zfs
size: 5
state: present
- name: Create new disk in VM (force rewrite in case it exists already)
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
vmid: 101
disk: scsi3
format: qcow2
storage: local
size: 16
create: forced
state: present
- name: Update existing disk
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
vmid: 101
disk: ide0
backup: false
ro: true
aio: native
state: present
- name: Grow existing disk
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
vmid: 101
disk: sata4
size: +5G
state: resized
- name: Detach disk (leave it unused)
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
name: vm-name
disk: virtio0
state: detached
- name: Move disk to another storage
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_password: secret
vmid: 101
disk: scsi7
target_storage: local
format: qcow2
state: moved
- name: Move disk from one VM to another
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
vmid: 101
disk: scsi7
target_vmid: 201
state: moved
- name: Remove disk permanently
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_password: secret
vmid: 101
disk: scsi4
state: absent
- name: Mount ISO image on CD-ROM (create drive if missing)
community.general.proxmox_disk:
api_host: node1
api_user: root@pam
api_token_id: token1
api_token_secret: some-token-data
vmid: 101
disk: ide2
media: cdrom
iso_image: local:iso/favorite_distro_amd64.iso
state: present
"""
RETURN = r"""
vmid:
description: The VM vmid.
returned: success
type: int
sample: 101
msg:
description: A short message on what the module did.
returned: always
type: str
sample: "Disk scsi3 created in VM 101"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec,
ProxmoxAnsible)
from re import compile, match, sub
def disk_conf_str_to_dict(config_string):
"""
Transform a Proxmox disk configuration string into a dictionary in which the
volume option is parsed in '{ storage }:{ volume }' format and the other options
are parsed in '{ option }={ value }' format. This dictionary is afterwards compared
with the attributes that the user passed to this module in the playbook.\n
config_string examples:
- local-lvm:vm-100-disk-0,ssd=1,discard=on,size=25G
- local:iso/new-vm-ignition.iso,media=cdrom,size=70k
- none,media=cdrom
:param config_string: Retrieved from Proxmox API configuration string
:return: Dictionary with volume option divided into parts ('volume_name', 'storage_name', 'volume') \n
and other options as key:value.
"""
config = config_string.split(',')
# When empty CD-ROM drive present, the volume part of config string is "none".
storage_volume = config.pop(0)
if storage_volume in ["none", "cdrom"]:
config_current = dict(
volume=storage_volume,
storage_name=None,
volume_name=None,
size=None,
)
else:
storage_volume = storage_volume.split(':')
storage_name = storage_volume[0]
volume_name = storage_volume[1]
config_current = dict(
volume='%s:%s' % (storage_name, volume_name),
storage_name=storage_name,
volume_name=volume_name,
)
config.sort()
for option in config:
k, v = option.split('=')
config_current[k] = v
return config_current
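# A minimal usage sketch (illustration only, not part of the original module): how the
# helper above turns a typical Proxmox drive config string into the dictionary used for
# comparison. The config string value is a made-up example.
def _example_disk_conf_parse():
    parsed = disk_conf_str_to_dict('local-lvm:vm-100-disk-0,ssd=1,discard=on,size=25G')
    # parsed == {'volume': 'local-lvm:vm-100-disk-0', 'storage_name': 'local-lvm',
    #            'volume_name': 'vm-100-disk-0', 'discard': 'on', 'size': '25G', 'ssd': '1'}
    return parsed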
class ProxmoxDiskAnsible(ProxmoxAnsible):
create_update_fields = [
'aio', 'backup', 'bps_max_length', 'bps_rd_max_length', 'bps_wr_max_length',
'cache', 'cyls', 'detect_zeroes', 'discard', 'format', 'heads', 'import_from', 'iops', 'iops_max',
'iops_max_length', 'iops_rd', 'iops_rd_max', 'iops_rd_max_length', 'iops_wr', 'iops_wr_max',
'iops_wr_max_length', 'iothread', 'mbps', 'mbps_max', 'mbps_rd', 'mbps_rd_max', 'mbps_wr', 'mbps_wr_max',
'media', 'queues', 'replicate', 'rerror', 'ro', 'scsiblock', 'secs', 'serial', 'shared', 'snapshot',
'ssd', 'trans', 'werror', 'wwn'
]
supported_bus_num_ranges = dict(
ide=range(0, 4),
scsi=range(0, 31),
sata=range(0, 6),
virtio=range(0, 16),
unused=range(0, 256)
)
def get_create_attributes(self):
# Sanitize the parameters dictionary:
# - Remove undefined args
# - Ensure True and False are converted to int
# - Remove unnecessary parameters
params = {
k: int(v) if isinstance(v, bool) else v
for k, v in self.module.params.items()
if v is not None and k in self.create_update_fields
}
return params
def create_disk(self, disk, vmid, vm, vm_config):
"""Create a disk in the specified virtual machine. Check if creation is required,
and if so, compile the disk configuration and create it by updating the virtual
machine configuration. After calling the API function, wait for the result.
:param disk: ID of the disk in format "<bus><number>".
:param vmid: ID of the virtual machine where the disk will be created.
:param vm: The VM object (as returned by get_vm) where the disk will be created.
:param vm_config: Configuration of the virtual machine.
:return: (bool, string) Whether the task was successful or not
and the message to return to Ansible.
"""
create = self.module.params['create']
if create == 'disabled' and disk not in vm_config:
# NOOP
return False, "Disk %s not found in VM %s and creation was disabled in parameters." % (disk, vmid)
timeout_str = "Reached timeout. Last line in task before timeout: %s"
if (create == 'regular' and disk not in vm_config) or (create == 'forced'):
# CREATE
playbook_config = self.get_create_attributes()
import_string = playbook_config.pop('import_from', None)
iso_image = self.module.params.get('iso_image', None)
if import_string:
# When 'import_from' option is present in task options.
config_str = "%s:%s,import-from=%s" % (self.module.params["storage"], "0", import_string)
timeout_str = "Reached timeout while importing VM disk. Last line in task before timeout: %s"
ok_str = "Disk %s imported into VM %s"
elif iso_image is not None:
# disk=<busN>, media=cdrom, iso_image=<ISO_NAME>
config_str = iso_image
ok_str = "CD-ROM was created on %s bus in VM %s"
else:
config_str = self.module.params["storage"]
if not config_str:
self.module.fail_json(msg="The storage option must be specified.")
if self.module.params.get("media") != "cdrom":
config_str += ":%s" % (self.module.params["size"])
ok_str = "Disk %s created in VM %s"
timeout_str = "Reached timeout while creating VM disk. Last line in task before timeout: %s"
for k, v in playbook_config.items():
config_str += ',%s=%s' % (k, v)
disk_config_to_apply = {self.module.params["disk"]: config_str}
if create in ['disabled', 'regular'] and disk in vm_config:
# UPDATE
ok_str = "Disk %s updated in VM %s"
iso_image = self.module.params.get('iso_image', None)
proxmox_config = disk_conf_str_to_dict(vm_config[disk])
# 'import_from' fails on disk updates
playbook_config = self.get_create_attributes()
playbook_config.pop('import_from', None)
# Begin composing configuration string
if iso_image is not None:
config_str = iso_image
else:
config_str = proxmox_config["volume"]
# Append all mandatory fields from playbook_config
for k, v in playbook_config.items():
config_str += ',%s=%s' % (k, v)
# Append to playbook_config fields which are constants for disk images
for option in ['size', 'storage_name', 'volume', 'volume_name']:
playbook_config.update({option: proxmox_config[option]})
# CD-ROM is special disk device and its disk image is subject to change
if iso_image is not None:
playbook_config['volume'] = iso_image
# Values in params are numbers, but strings are needed to compare with disk_config
playbook_config = {k: str(v) for k, v in playbook_config.items()}
# Now compare old and new config to detect if changes are needed
if proxmox_config == playbook_config:
return False, "Disk %s is up to date in VM %s" % (disk, vmid)
disk_config_to_apply = {self.module.params["disk"]: config_str}
current_task_id = self.proxmox_api.nodes(vm['node']).qemu(vmid).config.post(**disk_config_to_apply)
task_success, fail_reason = self.api_task_complete(vm['node'], current_task_id, self.module.params['timeout'])
if task_success:
return True, ok_str % (disk, vmid)
else:
if fail_reason == ProxmoxAnsible.TASK_TIMED_OUT:
self.module.fail_json(
msg=timeout_str % self.proxmox_api.nodes(vm['node']).tasks(current_task_id).log.get()[:1]
)
else:
self.module.fail_json(msg="Error occurred on task execution: %s" % fail_reason)
def move_disk(self, disk, vmid, vm, vm_config):
"""Call the `move_disk` API function that moves the disk to another storage and wait for the result.
:param disk: ID of disk in format "<bus><number>".
:param vmid: ID of the virtual machine whose disk will be moved.
:param vm: The VM object (as returned by get_vm) whose disk will be moved.
:param vm_config: Virtual machine configuration.
:return: (bool, string) Whether the task was successful or not
and the message to return to Ansible.
"""
disk_config = disk_conf_str_to_dict(vm_config[disk])
disk_storage = disk_config["storage_name"]
params = dict()
params['disk'] = disk
params['vmid'] = vmid
params['bwlimit'] = self.module.params['bwlimit']
params['storage'] = self.module.params['target_storage']
params['target-disk'] = self.module.params['target_disk']
params['target-vmid'] = self.module.params['target_vmid']
params['format'] = self.module.params['format']
params['delete'] = 1 if self.module.params.get('delete_moved', False) else 0
# Remove not defined args
params = {k: v for k, v in params.items() if v is not None}
if params.get('storage', False):
# Check if the disk is already in the target storage.
disk_config = disk_conf_str_to_dict(vm_config[disk])
if params['storage'] == disk_config['storage_name']:
return False, "Disk %s already at %s storage" % (disk, disk_storage)
current_task_id = self.proxmox_api.nodes(vm['node']).qemu(vmid).move_disk.post(**params)
task_success, fail_reason = self.api_task_complete(vm['node'], current_task_id, self.module.params['timeout'])
if task_success:
return True, "Disk %s moved from VM %s storage %s" % (disk, vmid, disk_storage)
else:
if fail_reason == ProxmoxAnsible.TASK_TIMED_OUT:
self.module.fail_json(
msg='Reached timeout while waiting for moving VM disk. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(current_task_id).log.get()[:1]
)
else:
self.module.fail_json(msg="Error occurred on task execution: %s" % fail_reason)
def resize_disk(self, disk, vmid, vm, vm_config):
"""Call the `resize` API function to change the disk size and wait for the result.
:param disk: ID of disk in format "<bus><number>".
:param vmid: ID of the virtual machine whose disk will be resized.
:param vm: The VM object (as returned by get_vm) whose disk will be resized.
:param vm_config: Virtual machine configuration.
:return: (Bool, string) Whether the task was successful or not
and the message to return to Ansible.
"""
size = self.module.params['size']
if not match(r'^\+?\d+(\.\d+)?[KMGT]?$', size):
self.module.fail_json(msg="Unrecognized size pattern for disk %s: %s" % (disk, size))
disk_config = disk_conf_str_to_dict(vm_config[disk])
actual_size = disk_config['size']
if size == actual_size:
return False, "Disk %s is already %s size" % (disk, size)
# The resize disk API endpoint changed in v8.0: the PUT method became async.
version = self.version()
pve_major_version = 3 if version < LooseVersion('4.0') else version.version[0]
if pve_major_version >= 8:
current_task_id = self.proxmox_api.nodes(vm['node']).qemu(vmid).resize.set(disk=disk, size=size)
task_success, fail_reason = self.api_task_complete(vm['node'], current_task_id, self.module.params['timeout'])
if task_success:
return True, "Disk %s resized in VM %s" % (disk, vmid)
else:
if fail_reason == ProxmoxAnsible.TASK_TIMED_OUT:
self.module.fail_json(
msg="Reached timeout while resizing disk. Last line in task before timeout: %s" %
self.proxmox_api.nodes(vm['node']).tasks(current_task_id).log.get()[:1]
)
else:
self.module.fail_json(msg="Error occurred on task execution: %s" % fail_reason)
else:
self.proxmox_api.nodes(vm['node']).qemu(vmid).resize.set(disk=disk, size=size)
return True, "Disk %s resized in VM %s" % (disk, vmid)
def main():
module_args = proxmox_auth_argument_spec()
disk_args = dict(
# Proxmox native parameters
aio=dict(type='str', choices=['native', 'threads', 'io_uring']),
backup=dict(type='bool'),
bps_max_length=dict(type='int'),
bps_rd_max_length=dict(type='int'),
bps_wr_max_length=dict(type='int'),
cache=dict(type='str', choices=['none', 'writethrough', 'writeback', 'unsafe', 'directsync']),
cyls=dict(type='int'),
detect_zeroes=dict(type='bool'),
discard=dict(type='str', choices=['ignore', 'on']),
format=dict(type='str', choices=['raw', 'cow', 'qcow', 'qed', 'qcow2', 'vmdk', 'cloop']),
heads=dict(type='int'),
import_from=dict(type='str'),
iops=dict(type='int'),
iops_max=dict(type='int'),
iops_max_length=dict(type='int'),
iops_rd=dict(type='int'),
iops_rd_max=dict(type='int'),
iops_rd_max_length=dict(type='int'),
iops_wr=dict(type='int'),
iops_wr_max=dict(type='int'),
iops_wr_max_length=dict(type='int'),
iothread=dict(type='bool'),
iso_image=dict(type='str'),
mbps=dict(type='float'),
mbps_max=dict(type='float'),
mbps_rd=dict(type='float'),
mbps_rd_max=dict(type='float'),
mbps_wr=dict(type='float'),
mbps_wr_max=dict(type='float'),
media=dict(type='str', choices=['cdrom', 'disk']),
queues=dict(type='int'),
replicate=dict(type='bool'),
rerror=dict(type='str', choices=['ignore', 'report', 'stop']),
ro=dict(type='bool'),
scsiblock=dict(type='bool'),
secs=dict(type='int'),
serial=dict(type='str'),
shared=dict(type='bool'),
snapshot=dict(type='bool'),
ssd=dict(type='bool'),
trans=dict(type='str', choices=['auto', 'lba', 'none']),
werror=dict(type='str', choices=['enospc', 'ignore', 'report', 'stop']),
wwn=dict(type='str'),
# Disk moving relates parameters
bwlimit=dict(type='int'),
target_storage=dict(type='str'),
target_disk=dict(type='str'),
target_vmid=dict(type='int'),
delete_moved=dict(type='bool'),
timeout=dict(type='int', default='600'),
# Module related parameters
name=dict(type='str'),
vmid=dict(type='int'),
disk=dict(type='str', required=True),
storage=dict(type='str'),
size=dict(type='str'),
state=dict(type='str', choices=['present', 'resized', 'detached', 'moved', 'absent'],
default='present'),
create=dict(type='str', choices=['disabled', 'regular', 'forced'], default='regular'),
)
module_args.update(disk_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('name', 'vmid'), ('api_password', 'api_token_id')],
required_if=[
('create', 'forced', ['storage']),
('state', 'resized', ['size']),
],
required_by={
'target_disk': 'target_vmid',
'mbps_max': 'mbps',
'mbps_rd_max': 'mbps_rd',
'mbps_wr_max': 'mbps_wr',
'bps_max_length': 'mbps_max',
'bps_rd_max_length': 'mbps_rd_max',
'bps_wr_max_length': 'mbps_wr_max',
'iops_max': 'iops',
'iops_rd_max': 'iops_rd',
'iops_wr_max': 'iops_wr',
'iops_max_length': 'iops_max',
'iops_rd_max_length': 'iops_rd_max',
'iops_wr_max_length': 'iops_wr_max',
'iso_image': 'media',
},
supports_check_mode=False,
mutually_exclusive=[
('target_vmid', 'target_storage'),
('mbps', 'mbps_rd'),
('mbps', 'mbps_wr'),
('iops', 'iops_rd'),
('iops', 'iops_wr'),
('import_from', 'size'),
]
)
proxmox = ProxmoxDiskAnsible(module)
disk = module.params['disk']
# Verify the disk key has a valid bus name and number
disk_regex = compile(r'^([a-z]+)([0-9]+)$')
disk_bus = sub(disk_regex, r'\1', disk)
disk_number = int(sub(disk_regex, r'\2', disk))
if disk_bus not in proxmox.supported_bus_num_ranges:
proxmox.module.fail_json(msg='Unsupported disk bus: %s' % disk_bus)
elif disk_number not in proxmox.supported_bus_num_ranges[disk_bus]:
bus_range = proxmox.supported_bus_num_ranges[disk_bus]
proxmox.module.fail_json(msg='Disk %s number not in range %s..%s ' % (disk, bus_range[0], bus_range[-1]))
name = module.params['name']
state = module.params['state']
vmid = module.params['vmid'] or proxmox.get_vmid(name)
# Ensure VM id exists and retrieve its config
vm = None
vm_config = None
try:
vm = proxmox.get_vm(vmid)
vm_config = proxmox.proxmox_api.nodes(vm['node']).qemu(vmid).config.get()
except Exception as e:
proxmox.module.fail_json(msg='Getting information for VM %s failed with exception: %s' % (vmid, str(e)))
# Do not try to perform actions on missing disk
if disk not in vm_config and state in ['resized', 'moved']:
module.fail_json(vmid=vmid, msg='Unable to process missing disk %s in VM %s' % (disk, vmid))
if state == 'present':
try:
changed, message = proxmox.create_disk(disk, vmid, vm, vm_config)
module.exit_json(changed=changed, vmid=vmid, msg=message)
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to create/update disk %s in VM %s: %s' % (disk, vmid, str(e)))
elif state == 'detached':
try:
if disk_bus == 'unused':
module.exit_json(changed=False, vmid=vmid, msg='Disk %s already detached in VM %s' % (disk, vmid))
if disk not in vm_config:
module.exit_json(changed=False, vmid=vmid, msg="Disk %s not present in VM %s config" % (disk, vmid))
proxmox.proxmox_api.nodes(vm['node']).qemu(vmid).unlink.put(idlist=disk, force=0)
module.exit_json(changed=True, vmid=vmid, msg="Disk %s detached from VM %s" % (disk, vmid))
except Exception as e:
module.fail_json(msg="Failed to detach disk %s from VM %s with exception: %s" % (disk, vmid, str(e)))
elif state == 'moved':
try:
changed, message = proxmox.move_disk(disk, vmid, vm, vm_config)
module.exit_json(changed=changed, vmid=vmid, msg=message)
except Exception as e:
module.fail_json(msg="Failed to move disk %s in VM %s with exception: %s" % (disk, vmid, str(e)))
elif state == 'resized':
try:
changed, message = proxmox.resize_disk(disk, vmid, vm, vm_config)
module.exit_json(changed=changed, vmid=vmid, msg=message)
except Exception as e:
module.fail_json(msg="Failed to resize disk %s in VM %s with exception: %s" % (disk, vmid, str(e)))
elif state == 'absent':
try:
if disk not in vm_config:
module.exit_json(changed=False, vmid=vmid, msg="Disk %s is already absent in VM %s" % (disk, vmid))
proxmox.proxmox_api.nodes(vm['node']).qemu(vmid).unlink.put(idlist=disk, force=1)
module.exit_json(changed=True, vmid=vmid, msg="Disk %s removed from VM %s" % (disk, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to remove disk %s from VM %s: %s' % (disk, vmid, str(e)))
if __name__ == '__main__':
main()
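# A minimal sketch (illustration only, not part of the original module) of how main() above
# splits a disk key into bus and number and validates it against supported_bus_num_ranges.
import re as _re

def _example_disk_key_check(disk='scsi3'):
    bus = _re.sub(r'^([a-z]+)([0-9]+)$', r'\1', disk)           # -> 'scsi'
    number = int(_re.sub(r'^([a-z]+)([0-9]+)$', r'\2', disk))   # -> 3
    supported = {'ide': range(0, 4), 'scsi': range(0, 31), 'sata': range(0, 6),
                 'virtio': range(0, 16), 'unused': range(0, 256)}
    return bus in supported and number in supported[bus]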

View file

@ -1,137 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Tristan Le Guern (@tleguern) <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_domain_info
short_description: Retrieve information about one or more Proxmox VE domains
version_added: 1.3.0
description:
- Retrieve information about one or more Proxmox VE domains.
attributes:
action_group:
version_added: 9.0.0
options:
domain:
description:
- Restrict results to a specific authentication realm.
aliases: ['realm', 'name']
type: str
author: Tristan Le Guern (@tleguern)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List existing domains
community.general.proxmox_domain_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_domains
- name: Retrieve information about the pve domain
community.general.proxmox_domain_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
domain: pve
register: proxmox_domain_pve
"""
RETURN = r"""
proxmox_domains:
description: List of authentication domains.
returned: always, but can be empty
type: list
elements: dict
contains:
comment:
description: Short description of the realm.
returned: on success
type: str
realm:
description: Realm name.
returned: on success
type: str
type:
description: Realm type.
returned: on success
type: str
digest:
description: Realm hash.
returned: on success, can be absent
type: str
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxDomainInfoAnsible(ProxmoxAnsible):
def get_domain(self, realm):
try:
domain = self.proxmox_api.access.domains.get(realm)
except Exception:
self.module.fail_json(msg="Domain '%s' does not exist" % realm)
domain['realm'] = realm
return domain
def get_domains(self):
domains = self.proxmox_api.access.domains.get()
return domains
def proxmox_domain_info_argument_spec():
return dict(
domain=dict(type='str', aliases=['realm', 'name']),
)
def main():
module_args = proxmox_auth_argument_spec()
domain_info_args = proxmox_domain_info_argument_spec()
module_args.update(domain_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[('api_password', 'api_token_id')],
required_together=[('api_token_id', 'api_token_secret')],
supports_check_mode=True
)
result = dict(
changed=False
)
proxmox = ProxmoxDomainInfoAnsible(module)
domain = module.params['domain']
if domain:
domains = [proxmox.get_domain(realm=domain)]
else:
domains = proxmox.get_domains()
result['proxmox_domains'] = domains
module.exit_json(**result)
if __name__ == '__main__':
main()
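# A rough sketch (illustration only, not part of the original module) of what this module
# does through proxmoxer directly; the host name and credentials are made-up assumptions.
from proxmoxer import ProxmoxAPI

def _example_list_domains():
    api = ProxmoxAPI('helldorado', user='root@pam', password='secret', verify_ssl=False)
    all_realms = api.access.domains.get()       # every authentication realm
    pve_realm = api.access.domains.get('pve')   # a single realm, as get_domain() does above
    return all_realms, pve_realm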

View file

@ -1,147 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Tristan Le Guern <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_group_info
short_description: Retrieve information about one or more Proxmox VE groups
version_added: 1.3.0
description:
- Retrieve information about one or more Proxmox VE groups.
attributes:
action_group:
version_added: 9.0.0
options:
group:
description:
- Restrict results to a specific group.
aliases: ['groupid', 'name']
type: str
author: Tristan Le Guern (@tleguern)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List existing groups
community.general.proxmox_group_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_groups
- name: Retrieve information about the admin group
community.general.proxmox_group_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
group: admin
register: proxmox_group_admin
"""
RETURN = r"""
proxmox_groups:
description: List of groups.
returned: always, but can be empty
type: list
elements: dict
contains:
comment:
description: Short description of the group.
returned: on success, can be absent
type: str
groupid:
description: Group name.
returned: on success
type: str
users:
description: List of users in the group.
returned: on success
type: list
elements: str
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxGroupInfoAnsible(ProxmoxAnsible):
def get_group(self, groupid):
try:
group = self.proxmox_api.access.groups.get(groupid)
except Exception:
self.module.fail_json(msg="Group '%s' does not exist" % groupid)
group['groupid'] = groupid
return ProxmoxGroup(group)
def get_groups(self):
groups = self.proxmox_api.access.groups.get()
return [ProxmoxGroup(group) for group in groups]
class ProxmoxGroup:
def __init__(self, group):
self.group = dict()
# Data representation differs depending on the API call
for k, v in group.items():
if k == 'users' and isinstance(v, str):
self.group['users'] = v.split(',')
elif k == 'members':
self.group['users'] = group['members']
else:
self.group[k] = v
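# A minimal sketch (illustration only, not part of the original module): the two response
# shapes the wrapper above normalizes into a single 'users' list; names are made up.
def _example_group_normalization():
    listed = ProxmoxGroup({'groupid': 'admin', 'users': 'root@pam,alice@pve'})
    fetched = ProxmoxGroup({'groupid': 'admin', 'members': ['root@pam', 'alice@pve']})
    # Both end up with .group == {'groupid': 'admin', 'users': ['root@pam', 'alice@pve']}
    return listed.group, fetched.group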
def proxmox_group_info_argument_spec():
return dict(
group=dict(type='str', aliases=['groupid', 'name']),
)
def main():
module_args = proxmox_auth_argument_spec()
group_info_args = proxmox_group_info_argument_spec()
module_args.update(group_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[('api_password', 'api_token_id')],
required_together=[('api_token_id', 'api_token_secret')],
supports_check_mode=True
)
result = dict(
changed=False
)
proxmox = ProxmoxGroupInfoAnsible(module)
group = module.params['group']
if group:
groups = [proxmox.get_group(groupid=group)]
else:
groups = proxmox.get_groups()
result['proxmox_groups'] = [group.group for group in groups]
module.exit_json(**result)
if __name__ == '__main__':
main()

File diff suppressed because it is too large.

View file

@ -1,313 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2021, Lammert Hellinga (@Kogelvis) <lammert@hellinga.it>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_nic
short_description: Management of a NIC of a Qemu(KVM) VM in a Proxmox VE cluster
version_added: 3.1.0
description:
- Allows you to create/update/delete a NIC on Qemu(KVM) Virtual Machines in a Proxmox VE cluster.
author: "Lammert Hellinga (@Kogelvis) <lammert@hellinga.it>"
attributes:
check_mode:
support: full
diff_mode:
support: none
action_group:
version_added: 9.0.0
options:
bridge:
description:
- Add this interface to the specified bridge device. The Proxmox VE default bridge is called V(vmbr0).
type: str
firewall:
description:
- Whether this interface should be protected by the firewall.
type: bool
default: false
interface:
description:
- Name of the interface, should be V(net[n]) where C(1 ≤ n ≤ 31).
type: str
required: true
link_down:
description:
- Whether this interface should be disconnected (like pulling the plug).
type: bool
default: false
mac:
description:
- V(XX:XX:XX:XX:XX:XX) should be a unique MAC address. This is automatically generated if not specified.
- When not specified this module will keep the MAC address the same when changing an existing interface.
type: str
model:
description:
- The NIC emulator model.
type: str
choices: ['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em', 'i82551', 'i82557b', 'i82559er', 'ne2k_isa', 'ne2k_pci',
'pcnet', 'rtl8139', 'virtio', 'vmxnet3']
default: virtio
mtu:
description:
- Force MTU, for the C(virtio) model only; the setting is ignored otherwise.
- Set to V(1) to use the bridge MTU.
- Value should be C(1 ≤ n ≤ 65520).
type: int
name:
description:
- Specifies the VM name. Only used on the configuration web interface.
- Required only for O(state=present).
type: str
queues:
description:
- Number of packet queues to be used on the device.
- Value should be C(0 ≤ n ≤ 16).
type: int
rate:
description:
- Rate limit in MBps (MegaBytes per second) as floating point number.
type: float
state:
description:
- Indicates desired state of the NIC.
type: str
choices: ['present', 'absent']
default: present
tag:
description:
- VLAN tag to apply to packets on this interface.
- Value should be C(1 ≤ n ≤ 4094).
type: int
trunks:
description:
- List of VLAN trunks to pass through this interface.
type: list
elements: int
vmid:
description:
- Specifies the instance ID.
type: int
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Create NIC net0 targeting the vm by name
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
name: my_vm
interface: net0
bridge: vmbr0
tag: 3
- name: Create NIC net0 targeting the vm by id
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
vmid: 103
interface: net0
bridge: vmbr0
mac: "12:34:56:C0:FF:EE"
firewall: true
- name: Delete NIC net0 targeting the vm by name
community.general.proxmox_nic:
api_user: root@pam
api_password: secret
api_host: proxmoxhost
name: my_vm
interface: net0
state: absent
"""
RETURN = r"""
vmid:
description: The VM vmid.
returned: success
type: int
sample: 115
msg:
description: A short message.
returned: always
type: str
sample: "Nic net0 unchanged on VM with vmid 103"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxNicAnsible(ProxmoxAnsible):
def update_nic(self, vmid, interface, model, **kwargs):
vm = self.get_vm(vmid)
try:
vminfo = self.proxmox_api.nodes(vm['node']).qemu(vmid).config.get()
except Exception as e:
self.module.fail_json(msg='Getting information for VM with vmid = %s failed with exception: %s' % (vmid, e))
if interface in vminfo:
# Convert the current config to a dictionary
config = vminfo[interface].split(',')
config.sort()
config_current = {}
for i in config:
kv = i.split('=')
try:
config_current[kv[0]] = kv[1]
except IndexError:
config_current[kv[0]] = ''
# determine the current NIC model and MAC address
models = ['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em', 'i82551', 'i82557b',
'i82559er', 'ne2k_isa', 'ne2k_pci', 'pcnet', 'rtl8139', 'virtio', 'vmxnet3']
current_model = set(models) & set(config_current.keys())
current_model = current_model.pop()
current_mac = config_current[current_model]
# build nic config string
config_provided = "{0}={1}".format(model, current_mac)
else:
config_provided = model
if kwargs['mac']:
config_provided = "{0}={1}".format(model, kwargs['mac'])
if kwargs['bridge']:
config_provided += ",bridge={0}".format(kwargs['bridge'])
if kwargs['firewall']:
config_provided += ",firewall=1"
if kwargs['link_down']:
config_provided += ',link_down=1'
if kwargs['mtu']:
config_provided += ",mtu={0}".format(kwargs['mtu'])
if model != 'virtio':
self.module.warn(
'Ignoring MTU for nic {0} on VM with vmid {1}, '
'model should be set to \'virtio\': '.format(interface, vmid))
if kwargs['queues']:
config_provided += ",queues={0}".format(kwargs['queues'])
if kwargs['rate']:
config_provided += ",rate={0}".format(kwargs['rate'])
if kwargs['tag']:
config_provided += ",tag={0}".format(kwargs['tag'])
if kwargs['trunks']:
config_provided += ",trunks={0}".format(';'.join(str(x) for x in kwargs['trunks']))
net = {interface: config_provided}
vm = self.get_vm(vmid)
if ((interface not in vminfo) or (vminfo[interface] != config_provided)):
if not self.module.check_mode:
self.proxmox_api.nodes(vm['node']).qemu(vmid).config.set(**net)
return True
return False
def delete_nic(self, vmid, interface):
vm = self.get_vm(vmid)
vminfo = self.proxmox_api.nodes(vm['node']).qemu(vmid).config.get()
if interface in vminfo:
if not self.module.check_mode:
self.proxmox_api.nodes(vm['node']).qemu(vmid).config.set(delete=interface)
return True
return False
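# A minimal sketch (illustration only, not part of the original module) of the net[n]
# value update_nic() above composes for a typical call; the MAC address, bridge and tag
# are made-up examples.
def _example_nic_config_string():
    # interface=net0, model=virtio, mac=12:34:56:C0:FF:EE, bridge=vmbr0, firewall=true, tag=3
    return {'net0': 'virtio=12:34:56:C0:FF:EE,bridge=vmbr0,firewall=1,tag=3'}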
def main():
module_args = proxmox_auth_argument_spec()
nic_args = dict(
bridge=dict(type='str'),
firewall=dict(type='bool', default=False),
interface=dict(type='str', required=True),
link_down=dict(type='bool', default=False),
mac=dict(type='str'),
model=dict(choices=['e1000', 'e1000-82540em', 'e1000-82544gc', 'e1000-82545em',
'i82551', 'i82557b', 'i82559er', 'ne2k_isa', 'ne2k_pci', 'pcnet',
'rtl8139', 'virtio', 'vmxnet3'], default='virtio'),
mtu=dict(type='int'),
name=dict(type='str'),
queues=dict(type='int'),
rate=dict(type='float'),
state=dict(default='present', choices=['present', 'absent']),
tag=dict(type='int'),
trunks=dict(type='list', elements='int'),
vmid=dict(type='int'),
)
module_args.update(nic_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('name', 'vmid'), ('api_password', 'api_token_id')],
supports_check_mode=True,
)
proxmox = ProxmoxNicAnsible(module)
interface = module.params['interface']
model = module.params['model']
name = module.params['name']
state = module.params['state']
vmid = module.params['vmid']
# If vmid is not defined then retrieve its value from the VM name.
if not vmid:
vmid = proxmox.get_vmid(name)
# Ensure VM id exists
proxmox.get_vm(vmid)
if state == 'present':
try:
if proxmox.update_nic(vmid, interface, model,
bridge=module.params['bridge'],
firewall=module.params['firewall'],
link_down=module.params['link_down'],
mac=module.params['mac'],
mtu=module.params['mtu'],
queues=module.params['queues'],
rate=module.params['rate'],
tag=module.params['tag'],
trunks=module.params['trunks']):
module.exit_json(changed=True, vmid=vmid, msg="Nic {0} updated on VM with vmid {1}".format(interface, vmid))
else:
module.exit_json(vmid=vmid, msg="Nic {0} unchanged on VM with vmid {1}".format(interface, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to change nic {0} on VM with vmid {1}: '.format(interface, vmid) + str(e))
elif state == 'absent':
try:
if proxmox.delete_nic(vmid, interface):
module.exit_json(changed=True, vmid=vmid, msg="Nic {0} deleted on VM with vmid {1}".format(interface, vmid))
else:
module.exit_json(vmid=vmid, msg="Nic {0} does not exist on VM with vmid {1}".format(interface, vmid))
except Exception as e:
module.fail_json(vmid=vmid, msg='Unable to delete nic {0} on VM with vmid {1}: '.format(interface, vmid) + str(e))
if __name__ == '__main__':
main()

View file

@ -1,143 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright John Berninger (@jberning) <john.berninger at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_node_info
short_description: Retrieve information about one or more Proxmox VE nodes
version_added: 8.2.0
description:
- Retrieve information about one or more Proxmox VE nodes.
author: John Berninger (@jwbernin)
attributes:
action_group:
version_added: 9.0.0
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List existing nodes
community.general.proxmox_node_info:
api_host: proxmox1
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_nodes
"""
RETURN = r"""
proxmox_nodes:
description: List of Proxmox VE nodes.
returned: always, but can be empty
type: list
elements: dict
contains:
cpu:
description: Current CPU usage in fractional shares of this host's total available CPU.
returned: on success
type: float
disk:
description: Current local disk usage of this host.
returned: on success
type: int
id:
description: Identity of the node.
returned: on success
type: str
level:
description: Support level. Can be blank if not under a paid support contract.
returned: on success
type: str
maxcpu:
description: Total number of available CPUs on this host.
returned: on success
type: int
maxdisk:
description: Size of local disk in bytes.
returned: on success
type: int
maxmem:
description: Memory size in bytes.
returned: on success
type: int
mem:
description: Used memory in bytes.
returned: on success
type: int
node:
description: Short hostname of this node.
returned: on success
type: str
ssl_fingerprint:
description: SSL fingerprint of the node certificate.
returned: on success
type: str
status:
description: Node status.
returned: on success
type: str
type:
description: Object type being returned.
returned: on success
type: str
uptime:
description: Node uptime in seconds.
returned: on success
type: int
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxNodeInfoAnsible(ProxmoxAnsible):
def get_nodes(self):
nodes = self.proxmox_api.nodes.get()
return nodes
def proxmox_node_info_argument_spec():
return dict()
def main():
module_args = proxmox_auth_argument_spec()
node_info_args = proxmox_node_info_argument_spec()
module_args.update(node_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[('api_password', 'api_token_id')],
required_together=[('api_token_id', 'api_token_secret')],
supports_check_mode=True,
)
result = dict(
changed=False
)
proxmox = ProxmoxNodeInfoAnsible(module)
nodes = proxmox.get_nodes()
result['proxmox_nodes'] = nodes
module.exit_json(**result)
if __name__ == '__main__':
main()
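# A rough sketch (illustration only, not part of the original module) of a single entry in
# the list returned by get_nodes() above; all field values are made-up examples, see the
# RETURN section for the field meanings.
def _example_node_entry():
    return {
        'node': 'proxmox1', 'status': 'online', 'type': 'node', 'id': 'node/proxmox1',
        'cpu': 0.02, 'maxcpu': 16, 'mem': 8 * 1024 ** 3, 'maxmem': 64 * 1024 ** 3,
        'disk': 20 * 1024 ** 3, 'maxdisk': 100 * 1024 ** 3, 'uptime': 123456,
        'level': '', 'ssl_fingerprint': 'AA:BB:CC:DD',
    }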

View file

@ -1,182 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2023, Sergei Antipov (UnderGreen) <greendayonfire@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_pool
short_description: Pool management for Proxmox VE cluster
description:
- Create or delete a pool for Proxmox VE clusters.
- For pool members management please consult M(community.general.proxmox_pool_member) module.
version_added: 7.1.0
author: "Sergei Antipov (@UnderGreen) <greendayonfire@gmail.com>"
attributes:
check_mode:
support: full
diff_mode:
support: none
action_group:
version_added: 9.0.0
options:
poolid:
description:
- The pool ID.
type: str
aliases: ["name"]
required: true
state:
description:
- Indicate desired state of the pool.
- The pool must be empty prior to deleting it with O(state=absent).
choices: ['present', 'absent']
default: present
type: str
comment:
description:
- Specify the description for the pool.
- The parameter is ignored when the pool already exists or when O(state=absent).
type: str
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Create new Proxmox VE pool
community.general.proxmox_pool:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
comment: 'New pool'
- name: Delete the Proxmox VE pool
community.general.proxmox_pool:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
state: absent
"""
RETURN = r"""
poolid:
description: The pool ID.
returned: success
type: str
sample: test
msg:
description: A short message on what the module did.
returned: always
type: str
sample: "Pool test successfully created"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxPoolAnsible(ProxmoxAnsible):
def is_pool_existing(self, poolid):
"""Check whether pool already exist
:param poolid: str - name of the pool
:return: bool - is pool exists?
"""
try:
pools = self.proxmox_api.pools.get()
for pool in pools:
if pool['poolid'] == poolid:
return True
return False
except Exception as e:
self.module.fail_json(msg="Unable to retrieve pools: {0}".format(e))
def is_pool_empty(self, poolid):
"""Check whether pool has members
:param poolid: str - name of the pool
:return: bool - is pool empty?
"""
return not self.get_pool(poolid)['members']
def create_pool(self, poolid, comment=None):
"""Create Proxmox VE pool
:param poolid: str - name of the pool
:param comment: str, optional - Description of a pool
:return: None
"""
if self.is_pool_existing(poolid):
self.module.exit_json(changed=False, poolid=poolid, msg="Pool {0} already exists".format(poolid))
if self.module.check_mode:
return
try:
self.proxmox_api.pools.post(poolid=poolid, comment=comment)
except Exception as e:
self.module.fail_json(msg="Failed to create pool with ID {0}: {1}".format(poolid, e))
def delete_pool(self, poolid):
"""Delete Proxmox VE pool
:param poolid: str - name of the pool
:return: None
"""
if not self.is_pool_existing(poolid):
self.module.exit_json(changed=False, poolid=poolid, msg="Pool {0} doesn't exist".format(poolid))
if self.is_pool_empty(poolid):
if self.module.check_mode:
return
try:
self.proxmox_api.pools(poolid).delete()
except Exception as e:
self.module.fail_json(msg="Failed to delete pool with ID {0}: {1}".format(poolid, e))
else:
self.module.fail_json(msg="Can't delete pool {0} with members. Please remove members from pool first.".format(poolid))
def main():
module_args = proxmox_auth_argument_spec()
pools_args = dict(
poolid=dict(type="str", aliases=["name"], required=True),
comment=dict(type="str"),
state=dict(default="present", choices=["present", "absent"]),
)
module_args.update(pools_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[("api_token_id", "api_token_secret")],
required_one_of=[("api_password", "api_token_id")],
supports_check_mode=True
)
poolid = module.params["poolid"]
comment = module.params["comment"]
state = module.params["state"]
proxmox = ProxmoxPoolAnsible(module)
if state == "present":
proxmox.create_pool(poolid, comment)
module.exit_json(changed=True, poolid=poolid, msg="Pool {0} successfully created".format(poolid))
else:
proxmox.delete_pool(poolid)
module.exit_json(changed=True, poolid=poolid, msg="Pool {0} successfully deleted".format(poolid))
if __name__ == "__main__":
main()
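# A minimal sketch (illustration only, not part of the original module) of the underlying
# API calls the module makes for each state; the pool ID and comment are made-up examples.
def _example_pool_calls(proxmox_api):
    proxmox_api.pools.post(poolid='test', comment='New pool')   # state=present, pool missing
    proxmox_api.pools('test').delete()                          # state=absent, pool empty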

View file

@ -1,240 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2023, Sergei Antipov (UnderGreen) <greendayonfire@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_pool_member
short_description: Add or delete members from Proxmox VE cluster pools
description:
- Create or delete a pool member in Proxmox VE clusters.
version_added: 7.1.0
author: "Sergei Antipov (@UnderGreen) <greendayonfire@gmail.com>"
attributes:
check_mode:
support: full
diff_mode:
support: full
action_group:
version_added: 9.0.0
options:
poolid:
description:
- The pool ID.
type: str
aliases: ["name"]
required: true
member:
description:
- Specify the member name.
- For O(type=storage) it is a storage name.
- For O(type=vm) either the vmid or the VM name can be used.
type: str
required: true
type:
description:
- Member type to add/remove from the pool.
choices: ["vm", "storage"]
default: vm
type: str
state:
description:
- Indicate desired state of the pool member.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Add new VM to Proxmox VE pool
community.general.proxmox_pool_member:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
member: 101
- name: Add new storage to Proxmox VE pool
community.general.proxmox_pool_member:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
member: zfs-data
type: storage
- name: Remove VM from the Proxmox VE pool using VM name
community.general.proxmox_pool_member:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
member: pxe.home.arpa
state: absent
- name: Remove storage from the Proxmox VE pool
community.general.proxmox_pool_member:
api_host: node1
api_user: root@pam
api_password: password
poolid: test
member: zfs-storage
type: storage
state: absent
"""
RETURN = r"""
poolid:
description: The pool ID.
returned: success
type: str
sample: test
member:
description: Member name.
returned: success
type: str
sample: 101
msg:
description: A short message on what the module did.
returned: always
type: str
sample: "Member 101 deleted from the pool test"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxPoolMemberAnsible(ProxmoxAnsible):
def pool_members(self, poolid):
vms = []
storage = []
for member in self.get_pool(poolid)["members"]:
if member["type"] == "storage":
storage.append(member["storage"])
else:
vms.append(member["vmid"])
return (vms, storage)
def add_pool_member(self, poolid, member, member_type):
current_vms_members, current_storage_members = self.pool_members(poolid)
all_members_before = current_storage_members + current_vms_members
all_members_after = all_members_before.copy()
diff = {"before": {"members": all_members_before}, "after": {"members": all_members_after}}
try:
if member_type == "storage":
storages = self.get_storages(type=None)
if member not in [storage["storage"] for storage in storages]:
self.module.fail_json(msg="Storage {0} doesn't exist in the cluster".format(member))
if member in current_storage_members:
self.module.exit_json(changed=False, poolid=poolid, member=member,
diff=diff, msg="Member {0} is already part of the pool {1}".format(member, poolid))
all_members_after.append(member)
if self.module.check_mode:
return diff
self.proxmox_api.pools(poolid).put(storage=[member])
return diff
else:
try:
vmid = int(member)
except ValueError:
vmid = self.get_vmid(member)
if vmid in current_vms_members:
self.module.exit_json(changed=False, poolid=poolid, member=member,
diff=diff, msg="VM {0} is already part of the pool {1}".format(member, poolid))
all_members_after.append(member)
if not self.module.check_mode:
self.proxmox_api.pools(poolid).put(vms=[vmid])
return diff
except Exception as e:
self.module.fail_json(msg="Failed to add a new member ({0}) to the pool {1}: {2}".format(member, poolid, e))
def delete_pool_member(self, poolid, member, member_type):
current_vms_members, current_storage_members = self.pool_members(poolid)
all_members_before = current_storage_members + current_vms_members
all_members_after = all_members_before.copy()
diff = {"before": {"members": all_members_before}, "after": {"members": all_members_after}}
try:
if member_type == "storage":
if member not in current_storage_members:
self.module.exit_json(changed=False, poolid=poolid, member=member,
diff=diff, msg="Member {0} is not part of the pool {1}".format(member, poolid))
all_members_after.remove(member)
if self.module.check_mode:
return diff
self.proxmox_api.pools(poolid).put(storage=[member], delete=1)
return diff
else:
try:
vmid = int(member)
except ValueError:
vmid = self.get_vmid(member)
if vmid not in current_vms_members:
self.module.exit_json(changed=False, poolid=poolid, member=member,
diff=diff, msg="VM {0} is not part of the pool {1}".format(member, poolid))
all_members_after.remove(vmid)
if not self.module.check_mode:
self.proxmox_api.pools(poolid).put(vms=[vmid], delete=1)
return diff
except Exception as e:
self.module.fail_json(msg="Failed to delete a member ({0}) from the pool {1}: {2}".format(member, poolid, e))
def main():
module_args = proxmox_auth_argument_spec()
pool_members_args = dict(
poolid=dict(type="str", aliases=["name"], required=True),
member=dict(type="str", required=True),
type=dict(default="vm", choices=["vm", "storage"]),
state=dict(default="present", choices=["present", "absent"]),
)
module_args.update(pool_members_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[("api_token_id", "api_token_secret")],
required_one_of=[("api_password", "api_token_id")],
supports_check_mode=True
)
poolid = module.params["poolid"]
member = module.params["member"]
member_type = module.params["type"]
state = module.params["state"]
proxmox = ProxmoxPoolMemberAnsible(module)
if state == "present":
diff = proxmox.add_pool_member(poolid, member, member_type)
module.exit_json(changed=True, poolid=poolid, member=member, diff=diff, msg="New member {0} added to the pool {1}".format(member, poolid))
else:
diff = proxmox.delete_pool_member(poolid, member, member_type)
module.exit_json(changed=True, poolid=poolid, member=member, diff=diff, msg="Member {0} deleted from the pool {1}".format(member, poolid))
if __name__ == "__main__":
main()

View file

@ -1,395 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020, Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_snap
short_description: Snapshot management of instances in Proxmox VE cluster
version_added: 2.0.0
description:
- Allows you to create/delete/restore snapshots from instances in Proxmox VE cluster.
- Supports both KVM and LXC; OpenVZ has not been tested, as it is no longer supported on Proxmox VE.
attributes:
check_mode:
support: full
diff_mode:
support: none
action_group:
version_added: 9.0.0
options:
hostname:
description:
- The instance name.
type: str
vmid:
description:
- The instance ID.
  - If not set, it is fetched from the Proxmox API based on the hostname.
type: str
state:
description:
- Indicate desired state of the instance snapshot.
- The V(rollback) value was added in community.general 4.8.0.
choices: ['present', 'absent', 'rollback']
default: present
type: str
force:
description:
  - Remove the snapshot from the config file, even if removing the disk snapshot fails.
default: false
type: bool
unbind:
description:
- This option only applies to LXC containers.
  - Allows snapshotting a container even if it has configured mountpoints.
  - Temporarily disables all configured mountpoints, takes the snapshot, and finally restores the original configuration.
  - If the container is running, it is stopped and restarted to apply the configuration changes.
  - Due to restrictions in the Proxmox API this option can only be used when authenticating as V(root@pam) with O(api_password);
    API tokens do not work.
- See U(https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/lxc/{vmid}/config) (PUT tab) for more details.
default: false
type: bool
version_added: 5.7.0
vmstate:
description:
- Snapshot includes RAM.
default: false
type: bool
description:
description:
- Specify the description for the snapshot. Only used on the configuration web interface.
- This is saved as a comment inside the configuration file.
type: str
timeout:
description:
- Timeout for operations.
default: 30
type: int
snapname:
description:
- Name of the snapshot that has to be created/deleted/restored.
default: 'ansible_snap'
type: str
retention:
description:
- Remove old snapshots if there are more than O(retention) snapshots.
- If O(retention) is set to V(0), all snapshots will be kept.
- This is only used when O(state=present) and when an actual snapshot is created. If no snapshot is created, all existing
snapshots will be kept.
default: 0
type: int
version_added: 7.1.0
notes:
- Requires proxmoxer and requests modules on host. These modules can be installed with pip.
requirements: ["proxmoxer", "requests"]
author: Jeffrey van Pelt (@Thulium-Drake)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
- name: Create new container snapshot
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: present
snapname: pre-updates
- name: Create new container snapshot and keep only the 2 newest snapshots
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: present
snapname: snapshot-42
retention: 2
- name: Create new snapshot for a container with configured mountpoints
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: present
unbind: true # requires root@pam+password auth, API tokens are not supported
snapname: pre-updates
- name: Remove container snapshot
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: absent
snapname: pre-updates
- name: Rollback container snapshot
community.general.proxmox_snap:
api_user: root@pam
api_password: 1q2w3e
api_host: node1
vmid: 100
state: rollback
snapname: pre-updates
"""
RETURN = r"""#"""
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxSnapAnsible(ProxmoxAnsible):
def snapshot(self, vm, vmid):
return getattr(self.proxmox_api.nodes(vm['node']), vm['type'])(vmid).snapshot
def vmconfig(self, vm, vmid):
return getattr(self.proxmox_api.nodes(vm['node']), vm['type'])(vmid).config
def vmstatus(self, vm, vmid):
return getattr(self.proxmox_api.nodes(vm['node']), vm['type'])(vmid).status
def _container_mp_get(self, vm, vmid):
cfg = self.vmconfig(vm, vmid).get()
mountpoints = {}
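# Container mountpoints are stored as config keys named mp0, mp1, ...;
# collect them so they can be detached and restored around the snapshot.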
for key, value in cfg.items():
if key.startswith('mp'):
mountpoints[key] = value
return mountpoints
def _container_mp_disable(self, vm, vmid, timeout, unbind, mountpoints, vmstatus):
# shutdown container if running
if vmstatus == 'running':
self.shutdown_instance(vm, vmid, timeout)
# delete all mountpoints configs
self.vmconfig(vm, vmid).put(delete=' '.join(mountpoints))
def _container_mp_restore(self, vm, vmid, timeout, unbind, mountpoints, vmstatus):
# NOTE: requires auth as `root@pam`, API tokens are not supported
# see https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/lxc/{vmid}/config
# restore original config
self.vmconfig(vm, vmid).put(**mountpoints)
# start container (if was running before snap)
if vmstatus == 'running':
self.start_instance(vm, vmid, timeout)
def start_instance(self, vm, vmid, timeout):
taskid = self.vmstatus(vm, vmid).start.post()
while timeout >= 0:
if self.api_task_ok(vm['node'], taskid):
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for VM to start. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def shutdown_instance(self, vm, vmid, timeout):
taskid = self.vmstatus(vm, vmid).shutdown.post()
while timeout >= 0:
if self.api_task_ok(vm['node'], taskid):
return True
timeout -= 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for VM to stop. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def snapshot_retention(self, vm, vmid, retention):
# ignore the last snapshot, which is the current state
snapshots = self.snapshot(vm, vmid).get()[:-1]
if retention > 0 and len(snapshots) > retention:
# sort by age, oldest first
for snap in sorted(snapshots, key=lambda x: x['snaptime'])[:len(snapshots) - retention]:
self.snapshot(vm, vmid)(snap['name']).delete()
def snapshot_create(self, vm, vmid, timeout, snapname, description, vmstate, unbind, retention):
if self.module.check_mode:
return True
if vm['type'] == 'lxc':
if unbind is True:
# check if credentials will work
# WARN: it is crucial this check runs here!
# The elevated permissions are only needed to reconfigure mountpoints.
# Not checking now would allow removing the configuration but then fail
# later, leaving the container in a misconfigured state.
if (
self.module.params['api_user'] != 'root@pam'
or not self.module.params['api_password']
):
self.module.fail_json(msg='`unbind=True` requires authentication as `root@pam` with `api_password`, API tokens are not supported.')
return False
mountpoints = self._container_mp_get(vm, vmid)
vmstatus = self.vmstatus(vm, vmid).current().get()['status']
if mountpoints:
self._container_mp_disable(vm, vmid, timeout, unbind, mountpoints, vmstatus)
taskid = self.snapshot(vm, vmid).post(snapname=snapname, description=description)
else:
taskid = self.snapshot(vm, vmid).post(snapname=snapname, description=description, vmstate=int(vmstate))
while timeout >= 0:
if self.api_task_ok(vm['node'], taskid):
break
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for creating VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
if vm['type'] == 'lxc' and unbind is True and mountpoints:
self._container_mp_restore(vm, vmid, timeout, unbind, mountpoints, vmstatus)
self.snapshot_retention(vm, vmid, retention)
return timeout > 0
def snapshot_remove(self, vm, vmid, timeout, snapname, force):
if self.module.check_mode:
return True
taskid = self.snapshot(vm, vmid).delete(snapname, force=int(force))
while timeout >= 0:
if self.api_task_ok(vm['node'], taskid):
return True
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for removing VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def snapshot_rollback(self, vm, vmid, timeout, snapname):
if self.module.check_mode:
return True
taskid = self.snapshot(vm, vmid)(snapname).post("rollback")
while timeout >= 0:
if self.api_task_ok(vm['node'], taskid):
return True
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for rolling back VM snapshot. Last line in task before timeout: %s' %
self.proxmox_api.nodes(vm['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
timeout -= 1
return False
def main():
module_args = proxmox_auth_argument_spec()
snap_args = dict(
vmid=dict(required=False),
hostname=dict(),
timeout=dict(type='int', default=30),
state=dict(default='present', choices=['present', 'absent', 'rollback']),
description=dict(type='str'),
snapname=dict(type='str', default='ansible_snap'),
force=dict(type='bool', default=False),
unbind=dict(type='bool', default=False),
vmstate=dict(type='bool', default=False),
retention=dict(type='int', default=0),
)
module_args.update(snap_args)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
proxmox = ProxmoxSnapAnsible(module)
state = module.params['state']
vmid = module.params['vmid']
hostname = module.params['hostname']
description = module.params['description']
snapname = module.params['snapname']
timeout = module.params['timeout']
force = module.params['force']
unbind = module.params['unbind']
vmstate = module.params['vmstate']
retention = module.params['retention']
# If hostname is set, get the VM ID from the Proxmox API
if not vmid and hostname:
vmid = proxmox.get_vmid(hostname)
elif not vmid:
module.exit_json(changed=False, msg="Vmid could not be fetched for the following action: %s" % state)
vm = proxmox.get_vm(vmid)
if state == 'present':
try:
for i in proxmox.snapshot(vm, vmid).get():
if i['name'] == snapname:
module.exit_json(changed=False, msg="Snapshot %s is already present" % snapname)
if proxmox.snapshot_create(vm, vmid, timeout, snapname, description, vmstate, unbind, retention):
if module.check_mode:
module.exit_json(changed=False, msg="Snapshot %s would be created" % snapname)
else:
module.exit_json(changed=True, msg="Snapshot %s created" % snapname)
except Exception as e:
module.fail_json(msg="Creating snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
elif state == 'absent':
try:
snap_exist = False
for i in proxmox.snapshot(vm, vmid).get():
if i['name'] == snapname:
snap_exist = True
continue
if not snap_exist:
module.exit_json(changed=False, msg="Snapshot %s does not exist" % snapname)
else:
if proxmox.snapshot_remove(vm, vmid, timeout, snapname, force):
if module.check_mode:
module.exit_json(changed=False, msg="Snapshot %s would be removed" % snapname)
else:
module.exit_json(changed=True, msg="Snapshot %s removed" % snapname)
except Exception as e:
module.fail_json(msg="Removing snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
elif state == 'rollback':
try:
snap_exist = False
for i in proxmox.snapshot(vm, vmid).get():
if i['name'] == snapname:
snap_exist = True
continue
if not snap_exist:
module.exit_json(changed=False, msg="Snapshot %s does not exist" % snapname)
if proxmox.snapshot_rollback(vm, vmid, timeout, snapname):
if module.check_mode:
module.exit_json(changed=True, msg="Snapshot %s would be rolled back" % snapname)
else:
module.exit_json(changed=True, msg="Snapshot %s rolled back" % snapname)
except Exception as e:
module.fail_json(msg="Rollback of snapshot %s of VM %s failed with exception: %s" % (snapname, vmid, to_native(e)))
if __name__ == '__main__':
main()

@@ -1,147 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Julian Vanden Broeck (@l00ptr) <julian.vandenbroeck at dalibo.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_storage_contents_info
short_description: List content from a Proxmox VE storage
version_added: 8.2.0
description:
- Retrieves information about stored objects on a specific storage attached to a node.
attributes:
action_group:
version_added: 9.0.0
options:
storage:
description:
- Only return content stored on that specific storage.
aliases: ['name']
type: str
required: true
node:
description:
- Proxmox node to which the storage is attached.
type: str
required: true
content:
description:
- Filter on a specific content type.
type: str
choices: ["all", "backup", "rootdir", "images", "iso"]
default: "all"
vmid:
description:
- Filter on a specific VMID.
type: int
author: Julian Vanden Broeck (@l00ptr)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List existing storages
community.general.proxmox_storage_contents_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
storage: lvm2
content: backup
vmid: 130
"""
RETURN = r"""
proxmox_storage_content:
description: Content of the storage attached to a node.
type: list
returned: success
elements: dict
contains:
content:
description: Proxmox content of listed objects on this storage.
type: str
returned: success
ctime:
description: Creation time of the listed objects.
type: str
returned: success
format:
description: Format of the listed objects (can be V(raw), V(pbs-vm), V(iso),...).
type: str
returned: success
size:
description: Size of the listed objects.
type: int
returned: success
subtype:
description: Subtype of the listed objects (can be V(qemu) or V(lxc)).
type: str
returned: When storage is dedicated to backup, typically on PBS storage.
verification:
description: Backup verification status of the listed objects.
type: dict
returned: When storage is dedicated to backup, typically on PBS storage.
sample: {
"state": "ok",
"upid": "UPID:backup-srv:00130F49:1A12D8375:00001CD7:657A2258:verificationjob:daily\\x3av\\x2dd0cc18c5\\x2d8707:root@pam:"
}
volid:
description: Volume identifier of the listed objects.
type: str
returned: success
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
ProxmoxAnsible, proxmox_auth_argument_spec)
def proxmox_storage_info_argument_spec():
return dict(
storage=dict(type="str", required=True, aliases=["name"]),
content=dict(type="str", required=False, default="all", choices=["all", "backup", "rootdir", "images", "iso"]),
vmid=dict(type="int"),
node=dict(required=True, type="str"),
)
def main():
module_args = proxmox_auth_argument_spec()
storage_info_args = proxmox_storage_info_argument_spec()
module_args.update(storage_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[("api_password", "api_token_id")],
required_together=[("api_token_id", "api_token_secret")],
supports_check_mode=True,
)
result = dict(changed=False)
proxmox = ProxmoxAnsible(module)
res = proxmox.get_storage_content(
node=module.params["node"],
storage=module.params["storage"],
content=None if module.params["content"] == "all" else module.params["content"],
vmid=module.params["vmid"],
)
result["proxmox_storage_content"] = res
module.exit_json(**result)
if __name__ == "__main__":
main()

@@ -1,194 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Tristan Le Guern (@tleguern) <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_storage_info
short_description: Retrieve information about one or more Proxmox VE storages
version_added: 2.2.0
description:
- Retrieve information about one or more Proxmox VE storages.
attributes:
action_group:
version_added: 9.0.0
options:
storage:
description:
- Only return information on a specific storage.
aliases: ['name']
type: str
type:
description:
- Filter on a specific storage type.
type: str
author: Tristan Le Guern (@tleguern)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
notes:
- Storage specific options can be returned by this module, please look at the documentation at U(https://pve.proxmox.com/wiki/Storage).
"""
EXAMPLES = r"""
- name: List existing storages
community.general.proxmox_storage_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_storages
- name: List NFS storages only
community.general.proxmox_storage_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
type: nfs
register: proxmox_storages_nfs
- name: Retrieve information about the lvm2 storage
community.general.proxmox_storage_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
storage: lvm2
register: proxmox_storage_lvm
"""
RETURN = r"""
proxmox_storages:
description: List of storage pools.
returned: on success
type: list
elements: dict
contains:
content:
description: Proxmox content types available in this storage.
returned: on success
type: list
elements: str
digest:
description: Storage's digest.
returned: on success
type: str
nodes:
description: List of nodes associated to this storage.
returned: on success, if storage is not local
type: list
elements: str
path:
description: Physical path to this storage.
returned: on success
type: str
prune-backups:
description: Backup retention options.
returned: on success
type: list
elements: dict
shared:
description: Whether this storage is shared.
returned: on success
type: bool
storage:
description: Storage name.
returned: on success
type: str
type:
description: Storage type.
returned: on success
type: str
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible, proxmox_to_ansible_bool)
class ProxmoxStorageInfoAnsible(ProxmoxAnsible):
def get_storage(self, storage):
try:
storage = self.proxmox_api.storage.get(storage)
except Exception:
self.module.fail_json(msg="Storage '%s' does not exist" % storage)
return ProxmoxStorage(storage)
def get_storages(self, type=None):
storages = self.proxmox_api.storage.get(type=type)
storages = [ProxmoxStorage(storage) for storage in storages]
return storages
class ProxmoxStorage:
def __init__(self, storage):
self.storage = storage
# Convert the Proxmox representation of lists, dicts and booleans for easier
# manipulation within Ansible.
if 'shared' in self.storage:
self.storage['shared'] = proxmox_to_ansible_bool(self.storage['shared'])
if 'content' in self.storage:
self.storage['content'] = self.storage['content'].split(',')
if 'nodes' in self.storage:
self.storage['nodes'] = self.storage['nodes'].split(',')
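# prune-backups is returned as a comma-separated list of key=value pairs;
# expand it into a proper dict for easier consumption.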
if 'prune-backups' in storage:
options = storage['prune-backups'].split(',')
self.storage['prune-backups'] = dict()
for option in options:
k, v = option.split('=')
self.storage['prune-backups'][k] = v
def proxmox_storage_info_argument_spec():
return dict(
storage=dict(type='str', aliases=['name']),
type=dict(type='str'),
)
def main():
module_args = proxmox_auth_argument_spec()
storage_info_args = proxmox_storage_info_argument_spec()
module_args.update(storage_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[('api_password', 'api_token_id')],
required_together=[('api_token_id', 'api_token_secret')],
mutually_exclusive=[('storage', 'type')],
supports_check_mode=True
)
result = dict(
changed=False
)
proxmox = ProxmoxStorageInfoAnsible(module)
storage = module.params['storage']
storagetype = module.params['type']
if storage:
storages = [proxmox.get_storage(storage)]
else:
storages = proxmox.get_storages(type=storagetype)
result['proxmox_storages'] = [storage.storage for storage in storages]
module.exit_json(**result)
if __name__ == '__main__':
main()

@@ -1,188 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2021, Andreas Botzner (@paginabianca) <andreas at botzner dot com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_tasks_info
short_description: Retrieve information about one or more Proxmox VE tasks
version_added: 3.8.0
description:
- Retrieve information about one or more Proxmox VE tasks.
author: 'Andreas Botzner (@paginabianca) <andreas at botzner dot com>'
attributes:
action_group:
version_added: 9.0.0
options:
node:
description:
- Node where to get tasks.
required: true
type: str
task:
description:
- Return specific task.
aliases: ['upid', 'name']
type: str
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List tasks on node01
community.general.proxmox_tasks_info:
api_host: proxmoxhost
api_user: root@pam
api_password: '{{ password | default(omit) }}'
api_token_id: '{{ token_id | default(omit) }}'
api_token_secret: '{{ token_secret | default(omit) }}'
node: node01
register: result
- name: Retrieve information about specific tasks on node01
community.general.proxmox_tasks_info:
api_host: proxmoxhost
api_user: root@pam
api_password: '{{ password | default(omit) }}'
api_token_id: '{{ token_id | default(omit) }}'
api_token_secret: '{{ token_secret | default(omit) }}'
task: 'UPID:node01:00003263:16167ACE:621EE230:srvreload:networking:root@pam:'
node: node01
register: proxmox_tasks
"""
RETURN = r"""
proxmox_tasks:
description: List of tasks.
returned: on success
type: list
elements: dict
contains:
id:
description: ID of the task.
returned: on success
type: str
node:
description: Node name.
returned: on success
type: str
pid:
description: PID of the task.
returned: on success
type: int
pstart:
description: Pstart value of the task.
returned: on success
type: int
starttime:
description: Starting time of the task.
returned: on success
type: int
type:
description: Type of the task.
returned: on success
type: str
upid:
description: UPID of the task.
returned: on success
type: str
user:
description: User that owns the task.
returned: on success
type: str
endtime:
description: Endtime of the task.
returned: on success, can be absent
type: int
status:
description: Status of the task.
returned: on success, can be absent
type: str
failed:
description: If the task failed.
returned: when status is defined
type: bool
msg:
description: Short message.
returned: on failure
type: str
sample: 'Task: UPID:xyz:xyz does not exist on node: proxmoxnode'
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible)
class ProxmoxTaskInfoAnsible(ProxmoxAnsible):
def get_task(self, upid, node):
tasks = self.get_tasks(node)
for task in tasks:
if task.info['upid'] == upid:
return [task]
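# Implicitly returns None when no task matches the UPID; main() turns
# that into a failure message.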
def get_tasks(self, node):
tasks = self.proxmox_api.nodes(node).tasks.get()
return [ProxmoxTask(task) for task in tasks]
class ProxmoxTask:
def __init__(self, task):
self.info = dict()
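# Copy the raw task fields; any status value other than 'OK' additionally
# sets the convenience flag 'failed'.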
for k, v in task.items():
if k == 'status' and isinstance(v, str):
self.info[k] = v
if v != 'OK':
self.info['failed'] = True
else:
self.info[k] = v
def proxmox_task_info_argument_spec():
return dict(
task=dict(type='str', aliases=['upid', 'name'], required=False),
node=dict(type='str', required=True),
)
def main():
module_args = proxmox_auth_argument_spec()
task_info_args = proxmox_task_info_argument_spec()
module_args.update(task_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('api_password', 'api_token_id')],
supports_check_mode=True)
result = dict(changed=False)
proxmox = ProxmoxTaskInfoAnsible(module)
upid = module.params['task']
node = module.params['node']
if upid:
tasks = proxmox.get_task(upid=upid, node=node)
else:
tasks = proxmox.get_tasks(node=node)
if tasks is not None:
result['proxmox_tasks'] = [task.info for task in tasks]
module.exit_json(**result)
else:
result['msg'] = 'Task: {0} does not exist on node: {1}.'.format(
upid, node)
module.fail_json(**result)
if __name__ == '__main__':
main()

@@ -1,374 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_template
short_description: Management of OS templates in Proxmox VE cluster
description:
- Allows you to upload/delete templates in Proxmox VE cluster.
attributes:
check_mode:
support: none
diff_mode:
support: none
action_group:
version_added: 9.0.0
options:
node:
description:
- Proxmox VE node on which to operate.
type: str
src:
description:
- Path to uploaded file.
- Exactly one of O(src) or O(url) is required for O(state=present).
type: path
url:
description:
- URL to file to download.
- Exactly one of O(src) or O(url) is required for O(state=present).
type: str
version_added: 10.1.0
template:
description:
- The template name.
- Required for O(state=absent) to delete a template.
- Required for O(state=present) to download an appliance container template (pveam).
type: str
content_type:
description:
- Content type.
- Required only for O(state=present).
type: str
default: 'vztmpl'
choices: ['vztmpl', 'iso']
storage:
description:
- Target storage.
type: str
default: 'local'
timeout:
description:
- Timeout for operations.
type: int
default: 30
force:
description:
  - Can only be used with O(state=present); the existing template will be overwritten.
type: bool
default: false
state:
description:
- Indicate desired state of the template.
type: str
choices: ['present', 'absent']
default: present
checksum_algorithm:
description:
- Algorithm used to verify the checksum.
- If specified, O(checksum) must also be specified.
type: str
choices: ['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512']
version_added: 10.3.0
checksum:
description:
- The checksum to validate against.
- Checksums are often provided by software distributors to verify that a download is not corrupted.
  - Checksums can usually be found on the distributor's download page in the form of a file or string.
- If specified, O(checksum_algorithm) must also be specified.
type: str
version_added: 10.3.0
notes:
- Requires C(proxmoxer) and C(requests) modules on host. Those modules can be installed with M(ansible.builtin.pip).
- C(proxmoxer) >= 1.2.0 requires C(requests_toolbelt) to upload files larger than 256 MB.
author: Sergei Antipov (@UnderGreen)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
"""
EXAMPLES = r"""
---
- name: Upload new openvz template with minimal options
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
src: ~/ubuntu-14.04-x86_64.tar.gz
- name: Pull new openvz template with minimal options
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
url: https://ubuntu-mirror/ubuntu-14.04-x86_64.tar.gz
- name: >
  Upload new openvz template with minimal options using the environment
  variable PROXMOX_PASSWORD (you should export it beforehand)
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_host: node1
src: ~/ubuntu-14.04-x86_64.tar.gz
- name: Upload new openvz template with all options and force overwrite
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
storage: local
content_type: vztmpl
src: ~/ubuntu-14.04-x86_64.tar.gz
force: true
- name: Pull new openvz template with all options and force overwrite
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
storage: local
content_type: vztmpl
url: https://ubuntu-mirror/ubuntu-14.04-x86_64.tar.gz
force: true
- name: Delete template with minimal options
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
template: ubuntu-14.04-x86_64.tar.gz
state: absent
- name: Download proxmox appliance container template
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
storage: local
content_type: vztmpl
template: ubuntu-20.04-standard_20.04-1_amd64.tar.gz
- name: Download and verify a template's checksum
community.general.proxmox_template:
node: uk-mc02
api_user: root@pam
api_password: 1q2w3e
api_host: node1
url: ubuntu-20.04-standard_20.04-1_amd64.tar.gz
checksum_algorithm: sha256
checksum: 65d860160bdc9b98abf72407e14ca40b609417de7939897d3b58d55787aaef69
"""
import os
import time
import traceback
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible_collections.community.general.plugins.module_utils.proxmox import (proxmox_auth_argument_spec, ProxmoxAnsible)
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlencode
REQUESTS_TOOLBELT_ERR = None
try:
# requests_toolbelt is used internally by proxmoxer module
import requests_toolbelt # noqa: F401, pylint: disable=unused-import
HAS_REQUESTS_TOOLBELT = True
except ImportError:
HAS_REQUESTS_TOOLBELT = False
REQUESTS_TOOLBELT_ERR = traceback.format_exc()
class ProxmoxTemplateAnsible(ProxmoxAnsible):
def has_template(self, node, storage, content_type, template):
volid = '%s:%s/%s' % (storage, content_type, template)
try:
return any(tmpl['volid'] == volid for tmpl in self.proxmox_api.nodes(node).storage(storage).content.get())
except Exception as e:
self.module.fail_json(msg="Failed to retrieve template '%s': %s" % (volid, e))
def task_status(self, node, taskid, timeout):
"""
Check the task status and wait until the task is completed or the timeout is reached.
"""
while timeout:
if self.api_task_ok(node, taskid):
return True
elif self.api_task_failed(node, taskid):
self.module.fail_json(msg="Task error: %s" % self.proxmox_api.nodes(node).tasks(taskid).status.get()['exitstatus'])
timeout = timeout - 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for uploading/downloading template. Last line in task before timeout: %s' %
self.proxmox_api.nodes(node).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def upload_template(self, node, storage, content_type, realpath, timeout):
stats = os.stat(realpath)
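# proxmoxer >= 1.2.0 relies on requests_toolbelt for large uploads; without
# it, files bigger than 256 MB (268435456 bytes) cannot be uploaded.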
if (LooseVersion(self.proxmoxer_version) >= LooseVersion('1.2.0') and
stats.st_size > 268435456 and not HAS_REQUESTS_TOOLBELT):
self.module.fail_json(msg="'requests_toolbelt' module is required to upload files larger than 256MB",
exception=missing_required_lib('requests_toolbelt'))
try:
taskid = self.proxmox_api.nodes(node).storage(storage).upload.post(content=content_type, filename=open(realpath, 'rb'))
return self.task_status(node, taskid, timeout)
except Exception as e:
self.module.fail_json(msg="Uploading template %s failed with error: %s" % (realpath, e))
def fetch_template(self, node, storage, content_type, url, timeout):
"""Fetch a template from a web url source using the proxmox download-url endpoint
"""
try:
taskid = self.proxmox_api.nodes(node).storage(storage)("download-url").post(
url=url, content=content_type, filename=os.path.basename(url)
)
return self.task_status(node, taskid, timeout)
except Exception as e:
self.module.fail_json(msg="Fetching template from url %s failed with error: %s" % (url, e))
def download_template(self, node, storage, template, timeout):
try:
taskid = self.proxmox_api.nodes(node).aplinfo.post(storage=storage, template=template)
return self.task_status(node, taskid, timeout)
except Exception as e:
self.module.fail_json(msg="Downloading template %s failed with error: %s" % (template, e))
def delete_template(self, node, storage, content_type, template, timeout):
volid = '%s:%s/%s' % (storage, content_type, template)
self.proxmox_api.nodes(node).storage(storage).content.delete(volid)
while timeout:
if not self.has_template(node, storage, content_type, template):
return True
timeout = timeout - 1
if timeout == 0:
self.module.fail_json(msg='Reached timeout while waiting for deleting template.')
time.sleep(1)
return False
def fetch_and_verify(self, node, storage, url, content_type, timeout, checksum, checksum_algorithm):
""" Fetch a template from a web url, then verify it using a checksum.
"""
data = {
'url': url,
'content': content_type,
'filename': os.path.basename(url),
'checksum': checksum,
'checksum-algorithm': checksum_algorithm}
try:
taskid = self.proxmox_api.nodes(node).storage(storage).post("download-url?{}".format(urlencode(data)))
return self.task_status(node, taskid, timeout)
except Exception as e:
self.module.fail_json(msg="Checksum mismatch: %s" % (e))
def main():
module_args = proxmox_auth_argument_spec()
template_args = dict(
node=dict(),
src=dict(type='path'),
url=dict(),
template=dict(),
content_type=dict(default='vztmpl', choices=['vztmpl', 'iso']),
storage=dict(default='local'),
timeout=dict(type='int', default=30),
force=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent']),
checksum_algorithm=dict(choices=['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512']),
checksum=dict(type='str'),
)
module_args.update(template_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret'), ('checksum', 'checksum_algorithm')],
required_one_of=[('api_password', 'api_token_id')],
required_if=[('state', 'absent', ['template'])],
mutually_exclusive=[("src", "url")],
)
proxmox = ProxmoxTemplateAnsible(module)
state = module.params['state']
node = module.params['node']
storage = module.params['storage']
timeout = module.params['timeout']
checksum = module.params['checksum']
checksum_algorithm = module.params['checksum_algorithm']
if state == 'present':
content_type = module.params['content_type']
src = module.params['src']
url = module.params['url']
# download appliance template
if content_type == 'vztmpl' and not (src or url):
template = module.params['template']
if not template:
module.fail_json(msg='template param for downloading appliance template is mandatory')
if proxmox.has_template(node, storage, content_type, template) and not module.params['force']:
module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
if proxmox.download_template(node, storage, template, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s downloaded' % (storage, content_type, template))
if not src and not url:
module.fail_json(msg='src or url param for uploading template file is mandatory')
elif not url:
template = os.path.basename(src)
if proxmox.has_template(node, storage, content_type, template) and not module.params['force']:
module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
elif not (os.path.exists(src) and os.path.isfile(src)):
module.fail_json(msg='template file at path %s does not exist' % src)
if proxmox.upload_template(node, storage, content_type, src, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s uploaded' % (storage, content_type, template))
elif not src:
template = os.path.basename(urlparse(url).path)
if proxmox.has_template(node, storage, content_type, template):
if not module.params['force']:
module.exit_json(changed=False, msg='template with volid=%s:%s/%s already exists' % (storage, content_type, template))
elif not proxmox.delete_template(node, storage, content_type, template, timeout):
module.fail_json(changed=False, msg='failed to delete template with volid=%s:%s/%s' % (storage, content_type, template))
if checksum:
if proxmox.fetch_and_verify(node, storage, url, content_type, timeout, checksum, checksum_algorithm):
module.exit_json(changed=True, msg="Checksum verified, template with volid=%s:%s/%s uploaded" % (storage, content_type, template))
if proxmox.fetch_template(node, storage, content_type, url, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s uploaded' % (storage, content_type, template))
elif state == 'absent':
try:
content_type = module.params['content_type']
template = module.params['template']
if not proxmox.has_template(node, storage, content_type, template):
module.exit_json(changed=False, msg='template with volid=%s:%s/%s is already deleted' % (storage, content_type, template))
if proxmox.delete_template(node, storage, content_type, template, timeout):
module.exit_json(changed=True, msg='template with volid=%s:%s/%s deleted' % (storage, content_type, template))
except Exception as e:
module.fail_json(msg="deleting of template %s failed with exception: %s" % (template, e))
if __name__ == '__main__':
main()

@@ -1,260 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright Tristan Le Guern <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_user_info
short_description: Retrieve information about one or more Proxmox VE users
version_added: 1.3.0
description:
- Retrieve information about one or more Proxmox VE users.
attributes:
action_group:
version_added: 9.0.0
options:
domain:
description:
- Restrict results to a specific authentication realm.
aliases: ['realm']
type: str
user:
description:
- Restrict results to a specific user.
aliases: ['name']
type: str
userid:
description:
  - Restrict results to a specific user ID, which is a concatenation of the user and domain parts.
type: str
author: Tristan Le Guern (@tleguern)
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List existing users
community.general.proxmox_user_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
register: proxmox_users
- name: List existing users in the pve authentication realm
community.general.proxmox_user_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
domain: pve
register: proxmox_users_pve
- name: Retrieve information about admin@pve
community.general.proxmox_user_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
userid: admin@pve
register: proxmox_user_admin
- name: Alternative way to retrieve information about admin@pve
community.general.proxmox_user_info:
api_host: helldorado
api_user: root@pam
api_password: "{{ password | default(omit) }}"
api_token_id: "{{ token_id | default(omit) }}"
api_token_secret: "{{ token_secret | default(omit) }}"
user: admin
domain: pve
register: proxmox_user_admin
"""
RETURN = r"""
proxmox_users:
description: List of users.
returned: always, but can be empty
type: list
elements: dict
contains:
comment:
description: Short description of the user.
returned: on success
type: str
domain:
description: User's authentication realm, also the right part of the user ID.
returned: on success
type: str
email:
description: User's email address.
returned: on success
type: str
enabled:
description: User's account state.
returned: on success
type: bool
expire:
description: Expiration date in seconds since EPOCH. Zero means no expiration.
returned: on success
type: int
firstname:
description: User's first name.
returned: on success
type: str
groups:
description: List of groups which the user is a member of.
returned: on success
type: list
elements: str
keys:
description: User's two factor authentication keys.
returned: on success
type: str
lastname:
description: User's last name.
returned: on success
type: str
tokens:
description: List of API tokens associated to the user.
returned: on success
type: list
elements: dict
contains:
comment:
description: Short description of the token.
returned: on success
type: str
expire:
description: Expiration date in seconds since EPOCH. Zero means no expiration.
returned: on success
type: int
privsep:
description: Whether the API token is further restricted with ACLs or is fully privileged.
returned: on success
type: bool
tokenid:
description: Token name.
returned: on success
type: str
user:
description: User's login name, also the left part of the user ID.
returned: on success
type: str
userid:
description: Proxmox user ID, represented as user@realm.
returned: on success
type: str
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec, ProxmoxAnsible, proxmox_to_ansible_bool)
class ProxmoxUserInfoAnsible(ProxmoxAnsible):
def get_user(self, userid):
try:
user = self.proxmox_api.access.users.get(userid)
except Exception:
self.module.fail_json(msg="User '%s' does not exist" % userid)
user['userid'] = userid
return ProxmoxUser(user)
def get_users(self, domain=None):
users = self.proxmox_api.access.users.get(full=1)
users = [ProxmoxUser(user) for user in users]
if domain:
return [user for user in users if user.user['domain'] == domain]
return users
class ProxmoxUser:
def __init__(self, user):
self.user = dict()
# Data representation is not the same depending on API calls
for k, v in user.items():
if k == 'enable':
self.user['enabled'] = proxmox_to_ansible_bool(user['enable'])
elif k == 'userid':
self.user['user'] = user['userid'].split('@')[0]
self.user['domain'] = user['userid'].split('@')[1]
self.user[k] = v
elif k in ['groups', 'tokens'] and (v == '' or v is None):
self.user[k] = []
elif k == 'groups' and isinstance(v, str):
self.user['groups'] = v.split(',')
elif k == 'tokens' and isinstance(v, list):
for token in v:
if 'privsep' in token:
token['privsep'] = proxmox_to_ansible_bool(token['privsep'])
self.user['tokens'] = v
elif k == 'tokens' and isinstance(v, dict):
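# Some API calls return tokens as a mapping keyed by token ID; flatten it
# into a list of dicts so the return shape is consistent either way.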
self.user['tokens'] = list()
for tokenid, tokenvalues in v.items():
t = tokenvalues
t['tokenid'] = tokenid
if 'privsep' in tokenvalues:
t['privsep'] = proxmox_to_ansible_bool(tokenvalues['privsep'])
self.user['tokens'].append(t)
else:
self.user[k] = v
def proxmox_user_info_argument_spec():
return dict(
domain=dict(type='str', aliases=['realm']),
user=dict(type='str', aliases=['name']),
userid=dict(type='str'),
)
def main():
module_args = proxmox_auth_argument_spec()
user_info_args = proxmox_user_info_argument_spec()
module_args.update(user_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_one_of=[('api_password', 'api_token_id')],
required_together=[('api_token_id', 'api_token_secret')],
mutually_exclusive=[('user', 'userid'), ('domain', 'userid')],
supports_check_mode=True
)
result = dict(
changed=False
)
proxmox = ProxmoxUserInfoAnsible(module)
domain = module.params['domain']
user = module.params['user']
if user and domain:
userid = user + '@' + domain
else:
userid = module.params['userid']
if userid:
users = [proxmox.get_user(userid=userid)]
else:
users = proxmox.get_users(domain=domain)
result['proxmox_users'] = [user.user for user in users]
module.exit_json(**result)
if __name__ == '__main__':
main()

@@ -1,285 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r"""
module: proxmox_vm_info
short_description: Retrieve information about one or more Proxmox VE virtual machines
version_added: 7.2.0
description:
- Retrieve information about one or more Proxmox VE virtual machines.
author: 'Sergei Antipov (@UnderGreen) <greendayonfire at gmail dot com>'
attributes:
action_group:
version_added: 9.0.0
options:
node:
description:
- Restrict results to a specific Proxmox VE node.
type: str
type:
description:
- Restrict results to a specific virtual machine(s) type.
type: str
choices:
- all
- qemu
- lxc
default: all
vmid:
description:
- Restrict results to a specific virtual machine by using its ID.
  - If a VM with the specified vmid does not exist in the cluster, the resulting list will be empty.
type: int
name:
description:
- Restrict results to a specific virtual machine(s) by using their name.
  - If VM(s) with the specified name do not exist in the cluster, the resulting list will be empty.
type: str
config:
description:
- Whether to retrieve the VM configuration along with VM status.
- If set to V(none) (default), no configuration will be returned.
- If set to V(current), the current running configuration will be returned.
- If set to V(pending), the configuration with pending changes applied will be returned.
type: str
choices:
- none
- current
- pending
default: none
version_added: 8.1.0
network:
description:
- Whether to retrieve the current network status.
- Requires enabled/running qemu-guest-agent on qemu VMs.
type: bool
default: false
version_added: 9.1.0
extends_documentation_fragment:
- community.general.proxmox.actiongroup_proxmox
- community.general.proxmox.documentation
- community.general.attributes
- community.general.attributes.info_module
"""
EXAMPLES = r"""
- name: List all existing virtual machines on node
community.general.proxmox_vm_info:
api_host: proxmoxhost
api_user: root@pam
api_token_id: '{{ token_id | default(omit) }}'
api_token_secret: '{{ token_secret | default(omit) }}'
node: node01
- name: List all QEMU virtual machines on node
community.general.proxmox_vm_info:
api_host: proxmoxhost
api_user: root@pam
api_password: '{{ password | default(omit) }}'
node: node01
type: qemu
- name: Retrieve information about specific VM by ID
community.general.proxmox_vm_info:
api_host: proxmoxhost
api_user: root@pam
api_password: '{{ password | default(omit) }}'
node: node01
type: qemu
vmid: 101
- name: Retrieve information about specific VM by name and get current configuration
community.general.proxmox_vm_info:
api_host: proxmoxhost
api_user: root@pam
api_password: '{{ password | default(omit) }}'
node: node01
type: lxc
name: lxc05.home.arpa
config: current
"""
RETURN = r"""
proxmox_vms:
description: List of virtual machines.
returned: on success
type: list
elements: dict
sample:
[
{
"cpu": 0.258944410905281,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35158379,
"name": "pxe.home.arpa",
"netin": 99715803,
"netout": 14237835,
"node": "pve",
"pid": 1947197,
"status": "running",
"template": False,
"type": "qemu",
"uptime": 135530,
"vmid": 100
},
{
"cpu": 0,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": "pve",
"status": "stopped",
"template": False,
"type": "qemu",
"uptime": 0,
"vmid": 101
}
]
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.proxmox import (
proxmox_auth_argument_spec,
ProxmoxAnsible,
proxmox_to_ansible_bool,
)
class ProxmoxVmInfoAnsible(ProxmoxAnsible):
def get_vms_from_cluster_resources(self):
try:
return self.proxmox_api.cluster().resources().get(type="vm")
except Exception as e:
self.module.fail_json(
msg="Failed to retrieve VMs information from cluster resources: %s" % e
)
def get_vms_from_nodes(self, cluster_machines, type, vmid=None, name=None, node=None, config=None, network=False):
# Keep only the machines the user wants to know about
filtered_vms = {
vm: info for vm, info in cluster_machines.items() if not (
type != info["type"]
or (node and info["node"] != node)
or (vmid and int(info["vmid"]) != vmid)
or (name is not None and info["name"] != name)
)
}
# Get list of unique node names and loop through it to get info about machines.
nodes = frozenset([info["node"] for vm, info in filtered_vms.items()])
for this_node in nodes:
# "type" is mandatory and can have only values of "qemu" or "lxc". Seems that use of reflection is safe.
call_vm_getter = getattr(self.proxmox_api.nodes(this_node), type)
vms_from_this_node = call_vm_getter().get()
for detected_vm in vms_from_this_node:
this_vm_id = int(detected_vm["vmid"])
desired_vm = filtered_vms.get(this_vm_id, None)
if desired_vm:
desired_vm.update(detected_vm)
desired_vm["vmid"] = this_vm_id
desired_vm["template"] = proxmox_to_ansible_bool(desired_vm.get("template", 0))
# When user wants to retrieve the VM configuration
if config != "none":
# pending = 0, current = 1
config_type = 0 if config == "pending" else 1
# GET /nodes/{node}/qemu/{vmid}/config current=[0/1]
desired_vm["config"] = call_vm_getter(this_vm_id).config().get(current=config_type)
if network:
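# Network details come from different endpoints: QEMU guests report them
# through the guest agent, LXC containers via the interfaces endpoint.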
if type == "qemu":
desired_vm["network"] = call_vm_getter(this_vm_id).agent("network-get-interfaces").get()['result']
elif type == "lxc":
desired_vm["network"] = call_vm_getter(this_vm_id).interfaces.get()
return filtered_vms
def get_qemu_vms(self, cluster_machines, vmid=None, name=None, node=None, config=None, network=False):
try:
return self.get_vms_from_nodes(cluster_machines, "qemu", vmid, name, node, config, network)
except Exception as e:
self.module.fail_json(msg="Failed to retrieve QEMU VMs information: %s" % e)
def get_lxc_vms(self, cluster_machines, vmid=None, name=None, node=None, config=None, network=False):
try:
return self.get_vms_from_nodes(cluster_machines, "lxc", vmid, name, node, config, network)
except Exception as e:
self.module.fail_json(msg="Failed to retrieve LXC VMs information: %s" % e)
def main():
module_args = proxmox_auth_argument_spec()
vm_info_args = dict(
node=dict(type="str", required=False),
type=dict(
type="str", choices=["lxc", "qemu", "all"], default="all", required=False
),
vmid=dict(type="int", required=False),
name=dict(type="str", required=False),
config=dict(
type="str", choices=["none", "current", "pending"],
default="none", required=False
),
network=dict(type="bool", default=False, required=False),
)
module_args.update(vm_info_args)
module = AnsibleModule(
argument_spec=module_args,
required_together=[("api_token_id", "api_token_secret")],
required_one_of=[("api_password", "api_token_id")],
supports_check_mode=True,
)
proxmox = ProxmoxVmInfoAnsible(module)
node = module.params["node"]
type = module.params["type"]
vmid = module.params["vmid"]
name = module.params["name"]
config = module.params["config"]
network = module.params["network"]
result = dict(changed=False)
if node and proxmox.get_node(node) is None:
module.fail_json(msg="Node %s doesn't exist in PVE cluster" % node)
vms_cluster_resources = proxmox.get_vms_from_cluster_resources()
cluster_machines = {int(machine["vmid"]): machine for machine in vms_cluster_resources}
vms = {}
if type == "lxc":
vms = proxmox.get_lxc_vms(cluster_machines, vmid, name, node, config, network)
elif type == "qemu":
vms = proxmox.get_qemu_vms(cluster_machines, vmid, name, node, config, network)
else:
vms = proxmox.get_qemu_vms(cluster_machines, vmid, name, node, config, network)
vms.update(proxmox.get_lxc_vms(cluster_machines, vmid, name, node, config, network))
result["proxmox_vms"] = [info for vm, info in sorted(vms.items())]
module.exit_json(**result)
if __name__ == "__main__":
main()

@@ -1,12 +0,0 @@
# Copyright (c) 2025 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
azp/posix/3
destructive
needs/root
needs/target/connection
skip/docker
skip/alpine
skip/macos

@@ -1,18 +0,0 @@
---
# Copyright (c) 2025 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- hosts: localhost
gather_facts: true
serial: 1
tasks:
- name: Copy pct mock
copy:
src: files/pct
dest: /usr/sbin/pct
mode: '0755'
- name: Install paramiko
pip:
name: "paramiko>=3.0.0"

@@ -1,33 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2025 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Shell script to mock proxmox pct behaviour
>&2 echo "[DEBUG] INPUT: $@"
pwd="$(pwd)"
# Get quoted parts and restore quotes
declare -a cmd=()
for arg in "$@"; do
if [[ $arg =~ [[:space:]] ]]; then
arg="'$arg'"
fi
cmd+=("$arg")
done
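# The connection plugin is expected to call: pct exec <vmid> -- <command ...>;
# keep only the command (arguments four and up) and read the vmid from the
# second argument.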
cmd="${cmd[@]:3}"
vmid="${@:2:1}"
>&2 echo "[INFO] MOCKING: pct ${@:1:3} ${cmd}"
tmp_dir="/tmp/ansible-remote/proxmox_pct_remote/integration_test/ct_${vmid}"
mkdir -p "$tmp_dir"
>&2 echo "[INFO] PWD: $tmp_dir"
>&2 echo "[INFO] CMD: ${cmd}"
cd "$tmp_dir"
eval "${cmd}"
cd "$pwd"

@@ -1,32 +0,0 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- hosts: "{{ target_hosts }}"
gather_facts: false
serial: 1
tasks:
- name: create file without content
copy:
content: ""
dest: "{{ remote_tmp }}/test_empty.txt"
force: no
mode: '0644'
- name: assert file without content exists
stat:
path: "{{ remote_tmp }}/test_empty.txt"
register: empty_file_stat
- name: verify file without content exists
assert:
that:
- empty_file_stat.stat.exists
fail_msg: "The file {{ remote_tmp }}/test_empty.txt does not exist."
- name: verify file without content is empty
assert:
that:
- empty_file_stat.stat.size == 0
fail_msg: "The file {{ remote_tmp }}/test_empty.txt is not empty."

@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2025 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -eux
ANSIBLE_ROLES_PATH=../ \
ansible-playbook dependencies.yml -v "$@"
./test.sh "$@"
ansible-playbook plugin-specific-tests.yml -i "./test_connection.inventory" \
-e target_hosts="proxmox_pct_remote" \
-e action_prefix= \
-e local_tmp=/tmp/ansible-local \
-e remote_tmp=/tmp/ansible-remote \
"$@"

@@ -1 +0,0 @@
../connection_posix/test.sh

@@ -1,14 +0,0 @@
# Copyright (c) 2025 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
[proxmox_pct_remote]
proxmox_pct_remote-pipelining ansible_ssh_pipelining=true
proxmox_pct_remote-no-pipelining ansible_ssh_pipelining=false
[proxmox_pct_remote:vars]
ansible_host=localhost
ansible_user=root
ansible_python_interpreter="{{ ansible_playbook_python }}"
ansible_connection=community.general.proxmox_pct_remote
proxmox_vmid=123

@@ -1,9 +0,0 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
unsupported
proxmox_domain_info
proxmox_group_info
proxmox_user_info
proxmox_storage_info


@ -1,616 +0,0 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# Copyright (c) 2020, Tristan Le Guern <tleguern at bouledef.eu>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: List domains
proxmox_domain_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
register: results
- assert:
that:
- results is not changed
- results.proxmox_domains is defined
- name: Retrieve info about pve
proxmox_domain_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
domain: pve
register: results
- assert:
that:
- results is not changed
- results.proxmox_domains is defined
- results.proxmox_domains|length == 1
- results.proxmox_domains[0].type == 'pve'
- name: List groups
proxmox_group_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
register: results
- assert:
that:
- results is not changed
- results.proxmox_groups is defined
- name: List users
proxmox_user_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
register: results
- assert:
that:
- results is not changed
- results.proxmox_users is defined
- name: Retrieve info about api_user using name and domain
proxmox_user_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
user: "{{ user }}"
domain: "{{ domain }}"
register: results_user_domain
- assert:
that:
- results_user_domain is not changed
- results_user_domain.proxmox_users is defined
- results_user_domain.proxmox_users|length == 1
- results_user_domain.proxmox_users[0].domain == "{{ domain }}"
- results_user_domain.proxmox_users[0].user == "{{ user }}"
- results_user_domain.proxmox_users[0].userid == "{{ user }}@{{ domain }}"
- name: Retrieve info about api_user using userid
proxmox_user_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
userid: "{{ user }}@{{ domain }}"
register: results_userid
- assert:
that:
- results_userid is not changed
- results_userid.proxmox_users is defined
- results_userid.proxmox_users|length == 1
- results_userid.proxmox_users[0].domain == "{{ domain }}"
- results_userid.proxmox_users[0].user == "{{ user }}"
- results_userid.proxmox_users[0].userid == "{{ user }}@{{ domain }}"
- name: Retrieve info about storage
proxmox_storage_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
storage: "{{ storage }}"
register: results_storage
- assert:
that:
- results_storage is not changed
- results_storage.proxmox_storages is defined
- results_storage.proxmox_storages|length == 1
- results_storage.proxmox_storages[0].storage == "{{ storage }}"
- name: List content on storage
proxmox_storage_contents_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
storage: "{{ storage }}"
node: "{{ node }}"
content: images
register: results_list_storage
- assert:
that:
- results_list_storage is not changed
- results_list_storage.proxmox_storage_content is defined
- results_list_storage.proxmox_storage_content | length == 1
- name: VM creation
tags: [ 'create' ]
block:
- name: Create test vm test-instance
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
storage: "{{ storage }}"
vmid: "{{ from_vmid }}"
name: test-instance
clone: 'yes'
state: present
tags:
- TagWithUppercaseChars
timeout: 500
register: results_kvm
- set_fact:
vmid: "{{ results_kvm.msg.split(' ')[-7] }}"
- assert:
that:
- results_kvm is changed
- results_kvm.vmid == from_vmid
- results_kvm.msg == "VM test-instance with newid {{ vmid }} cloned from vm with vmid {{ from_vmid }}"
- pause:
seconds: 30
- name: VM start
tags: [ 'start' ]
block:
- name: Start test VM
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: started
register: results_action_start
- assert:
that:
- results_action_start is changed
- results_action_start.status == 'stopped'
- results_action_start.vmid == {{ vmid }}
- results_action_start.msg == "VM {{ vmid }} started"
- pause:
seconds: 90
- name: Try to start test VM again
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: started
register: results_action_start_again
- assert:
that:
- results_action_start_again is not changed
- results_action_start_again.status == 'running'
- results_action_start_again.vmid == {{ vmid }}
- results_action_start_again.msg == "VM {{ vmid }} is already running"
- name: Check current status
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: current
register: results_action_current
- assert:
that:
- results_action_current is not changed
- results_action_current.status == 'running'
- results_action_current.vmid == {{ vmid }}
- results_action_current.msg == "VM test-instance with vmid = {{ vmid }} is running"
- name: VM add/change/delete NIC
tags: [ 'nic' ]
block:
- name: Add NIC to test VM
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 42
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 updated on VM with vmid {{ vmid }}"
- name: Update NIC no changes
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 42
register: results
- assert:
that:
- results is not changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 unchanged on VM with vmid {{ vmid }}"
- name: Update NIC with changes
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: present
interface: net5
bridge: vmbr0
tag: 24
firewall: true
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 updated on VM with vmid {{ vmid }}"
- name: Delete NIC
proxmox_nic:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
vmid: "{{ vmid }}"
state: absent
interface: net5
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Nic net5 deleted on VM with vmid {{ vmid }}"
- name: Create new disk in VM
tags: ['create_disk']
block:
- name: Add new disk (without force) to VM
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
storage: "{{ storage }}"
size: 1
state: present
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} created in VM {{ vmid }}"
- name: Try add disk again with same options (expect no-op)
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
storage: "{{ storage }}"
size: 1
state: present
register: results
- assert:
that:
- results is not changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} is up to date in VM {{ vmid }}"
- name: Add new disk replacing existing disk (detach old and leave unused)
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
storage: "{{ storage }}"
size: 2
create: forced
state: present
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} created in VM {{ vmid }}"
- name: Update existing disk in VM
tags: ['update_disk']
block:
- name: Update disk configuration
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
backup: false
ro: true
aio: native
state: present
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} updated in VM {{ vmid }}"
- name: Grow existing disk in VM
tags: ['grow_disk']
block:
- name: Increase disk size
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
size: +1G
state: resized
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} resized in VM {{ vmid }}"
- name: Detach disk and leave it unused
tags: ['detach_disk']
block:
- name: Detach disk
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
state: detached
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} detached from VM {{ vmid }}"
- name: Move disk to another storage or another VM
tags: ['move_disk']
block:
- name: Move disk to another storage inside same VM
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
target_storage: "{{ target_storage }}"
format: "{{ target_format }}"
state: moved
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} moved from VM {{ vmid }} storage {{ results.storage }}"
- name: Move disk to another VM (same storage)
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ vmid }}"
disk: "{{ disk }}"
target_vmid: "{{ target_vm }}"
target_disk: "{{ target_disk }}"
state: moved
register: results
- assert:
that:
- results is changed
- results.vmid == {{ vmid }}
- results.msg == "Disk {{ disk }} moved from VM {{ vmid }} storage {{ results.storage }}"
- name: Remove disk permanently
tags: ['remove_disk']
block:
- name: Remove disk
proxmox_disk:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
vmid: "{{ target_vm }}"
disk: "{{ target_disk }}"
state: absent
register: results
- assert:
that:
- results is changed
- results.vmid == {{ target_vm }}
- results.msg == "Disk {{ target_disk }} removed from VM {{ target_vm }}"
- name: VM stop
tags: [ 'stop' ]
block:
- name: Stop test VM
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: stopped
register: results_action_stop
- assert:
that:
- results_action_stop is changed
- results_action_stop.status == 'running'
- results_action_stop.vmid == {{ vmid }}
- results_action_stop.msg == "VM {{ vmid }} is shutting down"
- pause:
seconds: 5
- name: Check current status again
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: current
register: results_action_current
- assert:
that:
- results_action_current is not changed
- results_action_current.status == 'stopped'
- results_action_current.vmid == {{ vmid }}
- results_action_current.msg == "VM test-instance with vmid = {{ vmid }} is stopped"
- name: VM destroy
tags: [ 'destroy' ]
block:
- name: Destroy test VM
proxmox_kvm:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
node: "{{ node }}"
vmid: "{{ vmid }}"
state: absent
register: results_kvm_destroy
- assert:
that:
- results_kvm_destroy is changed
- results_kvm_destroy.vmid == {{ vmid }}
- results_kvm_destroy.msg == "VM {{ vmid }} removed"
- name: Retrieve information about nodes
proxmox_node_info:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
register: results
- assert:
that:
- results is not changed
- results.proxmox_nodes is defined
- results.proxmox_nodes|length >= 1
- results.proxmox_nodes[0].type == 'node'


@ -1,7 +0,0 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
unsupported
proxmox_pool
proxmox_pool_member


@ -1,7 +0,0 @@
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
poolid: test
member: local
member_type: storage


@ -1,220 +0,0 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Proxmox VE pool and pool membership management
tags: ["pool"]
block:
- name: Make sure poolid parameter is not missing
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
ignore_errors: true
register: result
- assert:
that:
- result is failed
- "'missing required arguments: poolid' in result.msg"
- name: Create pool (Check)
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
check_mode: true
register: result
- assert:
that:
- result is changed
- result is success
- name: Create pool
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
register: result
- assert:
that:
- result is changed
- result is success
- result.poolid == "{{ poolid }}"
- name: Delete pool (Check)
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
state: absent
check_mode: true
register: result
- assert:
that:
- result is changed
- result is success
- name: Delete non-existing pool should do nothing
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "non-existing-poolid"
state: absent
register: result
- assert:
that:
- result is not changed
- result is success
- name: Deletion of non-empty pool fails
block:
- name: Add storage into pool
proxmox_pool_member:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
member: "{{ member }}"
type: "{{ member_type }}"
diff: true
register: result
- assert:
that:
- result is changed
- result is success
- "'{{ member }}' in result.diff.after.members"
- name: Add non-existing storage into pool should fail
proxmox_pool_member:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
member: "non-existing-storage"
type: "{{ member_type }}"
ignore_errors: true
register: result
- assert:
that:
- result is failed
- "'Storage non-existing-storage doesn\\'t exist in the cluster' in result.msg"
- name: Delete non-empty pool
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
state: absent
ignore_errors: true
register: result
- assert:
that:
- result is failed
- "'Please remove members from pool first.' in result.msg"
- name: Delete storage from the pool
proxmox_pool_member:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
member: "{{ member }}"
type: "{{ member_type }}"
state: absent
register: result
- assert:
that:
- result is success
- result is changed
rescue:
- name: Delete storage from the pool if it is added
proxmox_pool_member:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
member: "{{ member }}"
type: "{{ member_type }}"
state: absent
ignore_errors: true
- name: Delete pool
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
state: absent
register: result
- assert:
that:
- result is changed
- result is success
- result.poolid == "{{ poolid }}"
rescue:
- name: Delete test pool if it is created
proxmox_pool:
api_host: "{{ api_host }}"
api_user: "{{ user }}@{{ domain }}"
api_password: "{{ api_password | default(omit) }}"
api_token_id: "{{ api_token_id | default(omit) }}"
api_token_secret: "{{ api_token_secret | default(omit) }}"
validate_certs: "{{ validate_certs }}"
poolid: "{{ poolid }}"
state: absent
ignore_errors: true


@ -1,6 +0,0 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
unsupported
proxmox_template


@ -1,136 +0,0 @@
####################################################################
# WARNING: These are designed specifically for Ansible tests #
# and should not be used as examples of how to write Ansible roles #
####################################################################
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Proxmox VE virtual machines templates management
tags: ['template']
vars:
filename: /tmp/dummy.iso
block:
- name: Create dummy ISO file
ansible.builtin.command:
cmd: 'truncate -s 300M {{ filename }}'
- name: Delete requests_toolbelt module if it is installed
ansible.builtin.pip:
name: requests_toolbelt
state: absent
- name: Install latest proxmoxer
ansible.builtin.pip:
name: proxmoxer
state: latest
- name: Upload ISO as template to Proxmox VE cluster should fail
proxmox_template:
api_host: '{{ api_host }}'
api_user: '{{ user }}@{{ domain }}'
api_password: '{{ api_password | default(omit) }}'
api_token_id: '{{ api_token_id | default(omit) }}'
api_token_secret: '{{ api_token_secret | default(omit) }}'
validate_certs: '{{ validate_certs }}'
node: '{{ node }}'
src: '{{ filename }}'
content_type: iso
force: true
register: result
ignore_errors: true
- assert:
that:
- result is failed
- result.msg is match('\'requests_toolbelt\' module is required to upload files larger than 256MB')
- name: Install old (1.1.1) version of proxmoxer
ansible.builtin.pip:
name: proxmoxer==1.1.1
state: present
- name: Upload ISO as template to Proxmox VE cluster should be successful
proxmox_template:
api_host: '{{ api_host }}'
api_user: '{{ user }}@{{ domain }}'
api_password: '{{ api_password | default(omit) }}'
api_token_id: '{{ api_token_id | default(omit) }}'
api_token_secret: '{{ api_token_secret | default(omit) }}'
validate_certs: '{{ validate_certs }}'
node: '{{ node }}'
src: '{{ filename }}'
content_type: iso
force: true
register: result
- assert:
that:
- result is changed
- result is success
- result.msg is match('template with volid=local:iso/dummy.iso uploaded')
- name: Install latest proxmoxer
ansible.builtin.pip:
name: proxmoxer
state: latest
- name: Make smaller dummy file
ansible.builtin.command:
cmd: 'truncate -s 128M {{ filename }}'
- name: Upload ISO as template to Proxmox VE cluster should be successful
proxmox_template:
api_host: '{{ api_host }}'
api_user: '{{ user }}@{{ domain }}'
api_password: '{{ api_password | default(omit) }}'
api_token_id: '{{ api_token_id | default(omit) }}'
api_token_secret: '{{ api_token_secret | default(omit) }}'
validate_certs: '{{ validate_certs }}'
node: '{{ node }}'
src: '{{ filename }}'
content_type: iso
force: true
register: result
- assert:
that:
- result is changed
- result is success
- result.msg is match('template with volid=local:iso/dummy.iso uploaded')
- name: Install requests_toolbelt
ansible.builtin.pip:
name: requests_toolbelt
state: present
- name: Make big dummy file
ansible.builtin.command:
cmd: 'truncate -s 300M {{ filename }}'
- name: Upload ISO as template to Proxmox VE cluster should be successful
proxmox_template:
api_host: '{{ api_host }}'
api_user: '{{ user }}@{{ domain }}'
api_password: '{{ api_password | default(omit) }}'
api_token_id: '{{ api_token_id | default(omit) }}'
api_token_secret: '{{ api_token_secret | default(omit) }}'
validate_certs: '{{ validate_certs }}'
node: '{{ node }}'
src: '{{ filename }}'
content_type: iso
force: true
register: result
- assert:
that:
- result is changed
- result is success
- result.msg is match('template with volid=local:iso/dummy.iso uploaded')
always:
- name: Delete ISO file from host
ansible.builtin.file:
path: '{{ filename }}'
state: absent


@ -5,7 +5,6 @@ plugins/inventory/iocage.py yamllint:unparsable-with-libyaml
plugins/inventory/linode.py yamllint:unparsable-with-libyaml
plugins/inventory/lxd.py yamllint:unparsable-with-libyaml
plugins/inventory/nmap.py yamllint:unparsable-with-libyaml
plugins/inventory/proxmox.py yamllint:unparsable-with-libyaml
plugins/inventory/scaleway.py yamllint:unparsable-with-libyaml
plugins/inventory/virtualbox.py yamllint:unparsable-with-libyaml
plugins/lookup/dependent.py validate-modules:unidiomatic-typecheck


@ -4,7 +4,6 @@ plugins/inventory/iocage.py yamllint:unparsable-with-libyaml
plugins/inventory/linode.py yamllint:unparsable-with-libyaml
plugins/inventory/lxd.py yamllint:unparsable-with-libyaml
plugins/inventory/nmap.py yamllint:unparsable-with-libyaml
plugins/inventory/proxmox.py yamllint:unparsable-with-libyaml
plugins/inventory/scaleway.py yamllint:unparsable-with-libyaml
plugins/inventory/virtualbox.py yamllint:unparsable-with-libyaml
plugins/lookup/dependent.py validate-modules:unidiomatic-typecheck


@ -1,585 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2024 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (annotations, absolute_import, division, print_function)
__metaclass__ = type
import os
import pytest
from ansible_collections.community.general.plugins.connection.proxmox_pct_remote import authenticity_msg, MyAddPolicy
from ansible_collections.community.general.plugins.module_utils._filelock import FileLock, LockTimeout
from ansible.errors import AnsibleError, AnsibleAuthenticationFailure, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_bytes
from ansible.module_utils.compat.paramiko import paramiko
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import connection_loader
from io import StringIO
from pathlib import Path
from unittest.mock import patch, MagicMock, mock_open
@pytest.fixture
def connection():
play_context = PlayContext()
in_stream = StringIO()
conn = connection_loader.get('community.general.proxmox_pct_remote', play_context, in_stream)
conn.set_option('remote_addr', '192.168.1.100')
conn.set_option('remote_user', 'root')
conn.set_option('password', 'password')
return conn
def test_connection_options(connection):
""" Test that connection options are properly set """
assert connection.get_option('remote_addr') == '192.168.1.100'
assert connection.get_option('remote_user') == 'root'
assert connection.get_option('password') == 'password'
def test_authenticity_msg():
""" Test authenticity message formatting """
msg = authenticity_msg('test.host', 'ssh-rsa', 'AA:BB:CC:DD')
assert 'test.host' in msg
assert 'ssh-rsa' in msg
assert 'AA:BB:CC:DD' in msg
def test_missing_host_key(connection):
""" Test MyAddPolicy missing_host_key method """
client = MagicMock()
key = MagicMock()
key.get_fingerprint.return_value = b'fingerprint'
key.get_name.return_value = 'ssh-rsa'
policy = MyAddPolicy(connection)
connection.set_option('host_key_auto_add', True)
policy.missing_host_key(client, 'test.host', key)
assert hasattr(key, '_added_by_ansible_this_time')
connection.set_option('host_key_auto_add', False)
connection.set_option('host_key_checking', False)
policy.missing_host_key(client, 'test.host', key)
connection.set_option('host_key_checking', True)
connection.set_option('host_key_auto_add', False)
connection.set_option('use_persistent_connections', False)
with patch('ansible.utils.display.Display.prompt_until', return_value='yes'):
policy.missing_host_key(client, 'test.host', key)
with patch('ansible.utils.display.Display.prompt_until', return_value='no'):
with pytest.raises(AnsibleError, match='host connection rejected by user'):
policy.missing_host_key(client, 'test.host', key)
def test_set_log_channel(connection):
""" Test setting log channel """
connection._set_log_channel('test_channel')
assert connection._log_channel == 'test_channel'
def test_parse_proxy_command(connection):
""" Test proxy command parsing """
connection.set_option('proxy_command', 'ssh -W %h:%p proxy.example.com')
connection.set_option('remote_addr', 'target.example.com')
connection.set_option('remote_user', 'testuser')
result = connection._parse_proxy_command(port=2222)
assert 'sock' in result
assert isinstance(result['sock'], paramiko.ProxyCommand)
@patch('paramiko.SSHClient')
def test_connect_with_rsa_sha2_disabled(mock_ssh, connection):
""" Test connection with RSA SHA2 algorithms disabled """
connection.set_option('use_rsa_sha2_algorithms', False)
mock_client = MagicMock()
mock_ssh.return_value = mock_client
connection._connect()
call_kwargs = mock_client.connect.call_args[1]
assert 'disabled_algorithms' in call_kwargs
assert 'pubkeys' in call_kwargs['disabled_algorithms']
@patch('paramiko.SSHClient')
def test_connect_with_bad_host_key(mock_ssh, connection):
""" Test connection with bad host key """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_client.connect.side_effect = paramiko.ssh_exception.BadHostKeyException(
'hostname', MagicMock(), MagicMock())
with pytest.raises(AnsibleConnectionFailure, match='host key mismatch'):
connection._connect()
@patch('paramiko.SSHClient')
def test_connect_with_invalid_host_key(mock_ssh, connection):
""" Test connection with bad host key """
connection.set_option('host_key_checking', True)
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_client.load_system_host_keys.side_effect = paramiko.hostkeys.InvalidHostKey(
"Bad Line!", Exception('Something crashed!'))
with pytest.raises(AnsibleConnectionFailure, match="Invalid host key: Bad Line!"):
connection._connect()
@patch('paramiko.SSHClient')
def test_connect_success(mock_ssh, connection):
""" Test successful SSH connection establishment """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
connection._connect()
assert mock_client.connect.called
assert connection._connected
@patch('paramiko.SSHClient')
def test_connect_authentication_failure(mock_ssh, connection):
""" Test SSH connection with authentication failure """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_client.connect.side_effect = paramiko.ssh_exception.AuthenticationException('Auth failed')
with pytest.raises(AnsibleAuthenticationFailure):
connection._connect()
def test_any_keys_added(connection):
""" Test checking for added host keys """
connection.ssh = MagicMock()
connection.ssh._host_keys = {
'host1': {
'ssh-rsa': MagicMock(_added_by_ansible_this_time=True),
'ssh-ed25519': MagicMock(_added_by_ansible_this_time=False)
}
}
assert connection._any_keys_added() is True
connection.ssh._host_keys = {
'host1': {
'ssh-rsa': MagicMock(_added_by_ansible_this_time=False)
}
}
assert connection._any_keys_added() is False
@patch('os.path.exists')
@patch('os.stat')
@patch('tempfile.NamedTemporaryFile')
def test_save_ssh_host_keys(mock_tempfile, mock_stat, mock_exists, connection):
""" Test saving SSH host keys """
mock_exists.return_value = True
mock_stat.return_value = MagicMock(st_mode=0o644, st_uid=1000, st_gid=1000)
mock_tempfile.return_value.__enter__.return_value.name = '/tmp/test_keys'
connection.ssh = MagicMock()
connection.ssh._host_keys = {
'host1': {
'ssh-rsa': MagicMock(
get_base64=lambda: 'KEY1',
_added_by_ansible_this_time=True
)
}
}
mock_open_obj = mock_open()
with patch('builtins.open', mock_open_obj):
connection._save_ssh_host_keys('/tmp/test_keys')
mock_open_obj().write.assert_called_with('host1 ssh-rsa KEY1\n')
def test_build_pct_command(connection):
""" Test PCT command building with different users """
connection.set_option('vmid', '100')
cmd = connection._build_pct_command('/bin/sh -c "ls -la"')
assert cmd == '/usr/sbin/pct exec 100 -- /bin/sh -c "ls -la"'
connection.set_option('remote_user', 'user')
connection.set_option('proxmox_become_method', 'sudo')
cmd = connection._build_pct_command('/bin/sh -c "ls -la"')
assert cmd == 'sudo /usr/sbin/pct exec 100 -- /bin/sh -c "ls -la"'
@patch('paramiko.SSHClient')
def test_exec_command_success(mock_ssh, connection):
""" Test successful command execution """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 0
mock_channel.makefile.return_value = [to_bytes('stdout')]
mock_channel.makefile_stderr.return_value = [to_bytes("")]
connection._connected = True
connection.ssh = mock_client
returncode, stdout, stderr = connection.exec_command('ls -la')
mock_transport.open_session.assert_called_once()
mock_channel.get_pty.assert_called_once()
mock_transport.set_keepalive.assert_called_once_with(5)
@patch('paramiko.SSHClient')
def test_exec_command_pct_not_found(mock_ssh, connection):
""" Test command execution when PCT is not found """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 1
mock_channel.makefile.return_value = [to_bytes("")]
mock_channel.makefile_stderr.return_value = [to_bytes('pct: not found')]
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleError, match='pct not found in path of host'):
connection.exec_command('ls -la')
@patch('paramiko.SSHClient')
def test_exec_command_session_open_failure(mock_ssh, connection):
""" Test exec_command when session opening fails """
mock_client = MagicMock()
mock_transport = MagicMock()
mock_transport.open_session.side_effect = Exception('Failed to open session')
mock_client.get_transport.return_value = mock_transport
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleConnectionFailure, match='Failed to open session'):
connection.exec_command('test command')
@patch('paramiko.SSHClient')
def test_exec_command_with_privilege_escalation(mock_ssh, connection):
""" Test exec_command with privilege escalation """
mock_client = MagicMock()
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
connection._connected = True
connection.ssh = mock_client
connection.become = MagicMock()
connection.become.expect_prompt.return_value = True
connection.become.check_success.return_value = False
connection.become.check_password_prompt.return_value = True
connection.become.get_option.return_value = 'sudo_password'
mock_channel.recv.return_value = b'[sudo] password:'
mock_channel.recv_exit_status.return_value = 0
mock_channel.makefile.return_value = [b""]
mock_channel.makefile_stderr.return_value = [b""]
returncode, stdout, stderr = connection.exec_command('sudo test command')
mock_channel.sendall.assert_called_once_with(b'sudo_password\n')
def test_put_file(connection):
""" Test putting a file to the remote system """
connection.exec_command = MagicMock()
connection.exec_command.return_value = (0, b"", b"")
with patch('builtins.open', create=True) as mock_open:
mock_open.return_value.__enter__.return_value.read.return_value = b'test content'
connection.put_file('/local/path', '/remote/path')
connection.exec_command.assert_called_once_with("/bin/sh -c 'cat > /remote/path'", in_data=b'test content', sudoable=False)
@patch('paramiko.SSHClient')
def test_put_file_general_error(mock_ssh, connection):
""" Test put_file with general error """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 1
mock_channel.makefile.return_value = [to_bytes("")]
mock_channel.makefile_stderr.return_value = [to_bytes('Some error')]
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleError, match='error occurred while putting file from /remote/path to /local/path'):
connection.put_file('/remote/path', '/local/path')
@patch('paramiko.SSHClient')
def test_put_file_cat_not_found(mock_ssh, connection):
""" Test command execution when cat is not found """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 1
mock_channel.makefile.return_value = [to_bytes("")]
mock_channel.makefile_stderr.return_value = [to_bytes('cat: not found')]
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleError, match='cat not found in path of container:'):
connection.fetch_file('/remote/path', '/local/path')
def test_fetch_file(connection):
""" Test fetching a file from the remote system """
connection.exec_command = MagicMock()
connection.exec_command.return_value = (0, b'test content', b"")
with patch('builtins.open', create=True) as mock_open:
connection.fetch_file('/remote/path', '/local/path')
connection.exec_command.assert_called_once_with("/bin/sh -c 'cat /remote/path'", sudoable=False)
mock_open.assert_called_with('/local/path', 'wb')
@patch('paramiko.SSHClient')
def test_fetch_file_general_error(mock_ssh, connection):
""" Test fetch_file with general error """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 1
mock_channel.makefile.return_value = [to_bytes("")]
mock_channel.makefile_stderr.return_value = [to_bytes('Some error')]
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleError, match='error occurred while fetching file from /remote/path to /local/path'):
connection.fetch_file('/remote/path', '/local/path')
@patch('paramiko.SSHClient')
def test_fetch_file_cat_not_found(mock_ssh, connection):
""" Test command execution when cat is not found """
mock_client = MagicMock()
mock_ssh.return_value = mock_client
mock_channel = MagicMock()
mock_transport = MagicMock()
mock_client.get_transport.return_value = mock_transport
mock_transport.open_session.return_value = mock_channel
mock_channel.recv_exit_status.return_value = 1
mock_channel.makefile.return_value = [to_bytes("")]
mock_channel.makefile_stderr.return_value = [to_bytes('cat: not found')]
connection._connected = True
connection.ssh = mock_client
with pytest.raises(AnsibleError, match='cat not found in path of container:'):
connection.fetch_file('/remote/path', '/local/path')
def test_close(connection):
""" Test connection close """
mock_ssh = MagicMock()
connection.ssh = mock_ssh
connection._connected = True
connection.close()
assert mock_ssh.close.called, 'ssh.close was not called'
assert not connection._connected, 'self._connected is still True'
def test_close_with_lock_file(connection):
""" Test close method with lock file creation """
connection._any_keys_added = MagicMock(return_value=True)
connection._connected = True
connection.keyfile = '/tmp/pct-remote-known_hosts-test'
connection.set_option('host_key_checking', True)
connection.set_option('lock_file_timeout', 5)
connection.set_option('record_host_keys', True)
connection.ssh = MagicMock()
lock_file_path = os.path.join(os.path.dirname(connection.keyfile),
f'ansible-{os.path.basename(connection.keyfile)}.lock')
try:
connection.close()
assert os.path.exists(lock_file_path), 'Lock file was not created'
lock_stat = os.stat(lock_file_path)
assert lock_stat.st_mode & 0o777 == 0o600, 'Incorrect lock file permissions'
finally:
Path(lock_file_path).unlink(missing_ok=True)
@patch('pathlib.Path.unlink')
@patch('os.path.exists')
def test_close_lock_file_time_out_error_handling(mock_exists, mock_unlink, connection):
""" Test close method with lock file timeout error """
connection._any_keys_added = MagicMock(return_value=True)
connection._connected = True
connection._save_ssh_host_keys = MagicMock()
connection.keyfile = '/tmp/pct-remote-known_hosts-test'
connection.set_option('host_key_checking', True)
connection.set_option('lock_file_timeout', 5)
connection.set_option('record_host_keys', True)
connection.ssh = MagicMock()
mock_exists.return_value = False
matcher = f'writing lock file for {connection.keyfile} ran in to the timeout of {connection.get_option("lock_file_timeout")}s'
with pytest.raises(AnsibleError, match=matcher):
with patch('os.getuid', return_value=1000), \
patch('os.getgid', return_value=1000), \
patch('os.chmod'), patch('os.chown'), \
patch('os.rename'), \
patch.object(FileLock, 'lock_file', side_effect=LockTimeout()):
connection.close()
@patch('ansible_collections.community.general.plugins.module_utils._filelock.FileLock.lock_file')
@patch('tempfile.NamedTemporaryFile')
@patch('os.chmod')
@patch('os.chown')
@patch('os.rename')
@patch('os.path.exists')
def test_tempfile_creation_and_move(mock_exists, mock_rename, mock_chown, mock_chmod, mock_tempfile, mock_lock_file, connection):
""" Test tempfile creation and move during close """
connection._any_keys_added = MagicMock(return_value=True)
connection._connected = True
connection._save_ssh_host_keys = MagicMock()
connection.keyfile = '/tmp/pct-remote-known_hosts-test'
connection.set_option('host_key_checking', True)
connection.set_option('lock_file_timeout', 5)
connection.set_option('record_host_keys', True)
connection.ssh = MagicMock()
mock_exists.return_value = False
mock_lock_file_instance = MagicMock()
mock_lock_file.return_value = mock_lock_file_instance
mock_lock_file_instance.__enter__.return_value = None
mock_tempfile_instance = MagicMock()
mock_tempfile_instance.name = '/tmp/mock_tempfile'
mock_tempfile.return_value.__enter__.return_value = mock_tempfile_instance
mode = 0o644
uid = 1000
gid = 1000
key_dir = os.path.dirname(connection.keyfile)
with patch('os.getuid', return_value=uid), patch('os.getgid', return_value=gid):
connection.close()
connection._save_ssh_host_keys.assert_called_once_with('/tmp/mock_tempfile')
mock_chmod.assert_called_once_with('/tmp/mock_tempfile', mode)
mock_chown.assert_called_once_with('/tmp/mock_tempfile', uid, gid)
mock_rename.assert_called_once_with('/tmp/mock_tempfile', connection.keyfile)
mock_tempfile.assert_called_once_with(dir=key_dir, delete=False)
@patch('pathlib.Path.unlink')
@patch('tempfile.NamedTemporaryFile')
@patch('ansible_collections.community.general.plugins.module_utils._filelock.FileLock.lock_file')
@patch('os.path.exists')
def test_close_tempfile_error_handling(mock_exists, mock_lock_file, mock_tempfile, mock_unlink, connection):
""" Test tempfile creation error """
connection._any_keys_added = MagicMock(return_value=True)
connection._connected = True
connection._save_ssh_host_keys = MagicMock()
connection.keyfile = '/tmp/pct-remote-known_hosts-test'
connection.set_option('host_key_checking', True)
connection.set_option('lock_file_timeout', 5)
connection.set_option('record_host_keys', True)
connection.ssh = MagicMock()
mock_exists.return_value = False
mock_lock_file_instance = MagicMock()
mock_lock_file.return_value = mock_lock_file_instance
mock_lock_file_instance.__enter__.return_value = None
mock_tempfile_instance = MagicMock()
mock_tempfile_instance.name = '/tmp/mock_tempfile'
mock_tempfile.return_value.__enter__.return_value = mock_tempfile_instance
with pytest.raises(AnsibleError, match='error occurred while writing SSH host keys!'):
with patch.object(os, 'chmod', side_effect=Exception()):
connection.close()
mock_unlink.assert_called_with(missing_ok=True)
@patch('ansible_collections.community.general.plugins.module_utils._filelock.FileLock.lock_file')
@patch('os.path.exists')
def test_close_with_invalid_host_key(mock_exists, mock_lock_file, connection):
""" Test load_system_host_keys on close with InvalidHostKey error """
connection._any_keys_added = MagicMock(return_value=True)
connection._connected = True
connection._save_ssh_host_keys = MagicMock()
connection.keyfile = '/tmp/pct-remote-known_hosts-test'
connection.set_option('host_key_checking', True)
connection.set_option('lock_file_timeout', 5)
connection.set_option('record_host_keys', True)
connection.ssh = MagicMock()
connection.ssh.load_system_host_keys.side_effect = paramiko.hostkeys.InvalidHostKey(
"Bad Line!", Exception('Something crashed!'))
mock_exists.return_value = False
mock_lock_file_instance = MagicMock()
mock_lock_file.return_value = mock_lock_file_instance
mock_lock_file_instance.__enter__.return_value = None
with pytest.raises(AnsibleConnectionFailure, match="Invalid host key: Bad Line!"):
connection.close()
def test_reset(connection):
""" Test connection reset """
connection._connected = True
connection.close = MagicMock()
connection._connect = MagicMock()
connection.reset()
connection.close.assert_called_once()
connection._connect.assert_called_once()
connection._connected = False
connection.reset()
assert connection.close.call_count == 1


@ -1,786 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Jeffrey van Pelt <jeff@vanpelt.one>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
#
# The API responses used in these tests were recorded from PVE version 6.2.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.inventory.data import InventoryData
from ansible_collections.community.general.plugins.inventory.proxmox import InventoryModule
@pytest.fixture(scope="module")
def inventory():
r = InventoryModule()
r.inventory = InventoryData()
return r
def test_verify_file(tmp_path, inventory):
file = tmp_path / "foobar.proxmox.yml"
file.touch()
assert inventory.verify_file(str(file)) is True
def test_verify_file_bad_config(inventory):
assert inventory.verify_file('foobar.proxmox.yml') is False
def get_auth():
return True
# NOTE: when updating/adding replies to this function,
# be sure to add only the _contents_ of the 'data' dict in the API reply
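# For illustration (an assumption about the live API, not taken from these recordings):
# the raw PVE endpoints wrap their answers in a {"data": ...} envelope; the plugin is
# expected to strip that envelope itself, so the mock below returns only what would
# appear under "data".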
def get_json(url, ignore_errors=None):
if url == "https://localhost:8006/api2/json/nodes":
# _get_nodes
return [{"type": "node",
"cpu": 0.01,
"maxdisk": 500,
"mem": 500,
"node": "testnode",
"id": "node/testnode",
"maxcpu": 1,
"status": "online",
"ssl_fingerprint": "xx",
"disk": 1000,
"maxmem": 1000,
"uptime": 10000,
"level": ""},
{"type": "node",
"node": "testnode2",
"id": "node/testnode2",
"status": "offline",
"ssl_fingerprint": "yy"}]
elif url == "https://localhost:8006/api2/json/pools":
# _get_pools
return [{"poolid": "test"}]
elif url == "https://localhost:8006/api2/json/nodes/testnode/lxc":
# _get_lxc_per_node
return [{"cpus": 1,
"name": "test-lxc",
"cpu": 0.01,
"diskwrite": 0,
"lock": "",
"maxmem": 1000,
"template": "",
"diskread": 0,
"mem": 1000,
"swap": 0,
"type": "lxc",
"maxswap": 0,
"maxdisk": "1000",
"netout": 1000,
"pid": "1000",
"netin": 1000,
"status": "running",
"vmid": "100",
"disk": "1000",
"uptime": 1000}]
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu":
# _get_qemu_per_node
return [{"name": "test-qemu",
"cpus": 1,
"mem": 1000,
"template": "",
"diskread": 0,
"cpu": 0.01,
"maxmem": 1000,
"diskwrite": 0,
"netout": 1000,
"pid": "1001",
"netin": 1000,
"maxdisk": 1000,
"vmid": "101",
"uptime": 1000,
"disk": 0,
"status": "running"},
{"name": "test-qemu-windows",
"cpus": 1,
"mem": 1000,
"template": "",
"diskread": 0,
"cpu": 0.01,
"maxmem": 1000,
"diskwrite": 0,
"netout": 1000,
"pid": "1001",
"netin": 1000,
"maxdisk": 1000,
"vmid": "102",
"uptime": 1000,
"disk": 0,
"status": "running"},
{"name": "test-qemu-multi-nic",
"cpus": 1,
"mem": 1000,
"template": "",
"diskread": 0,
"cpu": 0.01,
"maxmem": 1000,
"diskwrite": 0,
"netout": 1000,
"pid": "1001",
"netin": 1000,
"maxdisk": 1000,
"vmid": "103",
"uptime": 1000,
"disk": 0,
"status": "running"},
{"name": "test-qemu-template",
"cpus": 1,
"mem": 0,
"template": 1,
"diskread": 0,
"cpu": 0,
"maxmem": 1000,
"diskwrite": 0,
"netout": 0,
"pid": "1001",
"netin": 0,
"maxdisk": 1000,
"vmid": "9001",
"uptime": 0,
"disk": 0,
"status": "stopped"}]
elif url == "https://localhost:8006/api2/json/pools/test":
# _get_members_per_pool
return {"members": [{"uptime": 1000,
"template": 0,
"id": "qemu/101",
"mem": 1000,
"status": "running",
"cpu": 0.01,
"maxmem": 1000,
"diskwrite": 1000,
"name": "test-qemu",
"netout": 1000,
"netin": 1000,
"vmid": 101,
"node": "testnode",
"maxcpu": 1,
"type": "qemu",
"maxdisk": 1000,
"disk": 0,
"diskread": 1000}]}
elif url == "https://localhost:8006/api2/json/nodes/testnode/network":
# _get_node_ip
return [{"families": ["inet"],
"priority": 3,
"active": 1,
"cidr": "10.1.1.2/24",
"iface": "eth0",
"method": "static",
"exists": 1,
"type": "eth",
"netmask": "24",
"gateway": "10.1.1.1",
"address": "10.1.1.2",
"method6": "manual",
"autostart": 1},
{"method6": "manual",
"autostart": 1,
"type": "OVSPort",
"exists": 1,
"method": "manual",
"iface": "eth1",
"ovs_bridge": "vmbr0",
"active": 1,
"families": ["inet"],
"priority": 5,
"ovs_type": "OVSPort"},
{"type": "OVSBridge",
"method": "manual",
"iface": "vmbr0",
"families": ["inet"],
"priority": 4,
"ovs_ports": "eth1",
"ovs_type": "OVSBridge",
"method6": "manual",
"autostart": 1,
"active": 1}]
elif url == "https://localhost:8006/api2/json/nodes/testnode/lxc/100/config":
# _get_vm_config (lxc)
return {
"console": 1,
"rootfs": "local-lvm:vm-100-disk-0,size=4G",
"cmode": "tty",
"description": "A testnode",
"cores": 1,
"hostname": "test-lxc",
"arch": "amd64",
"tty": 2,
"swap": 0,
"cpulimit": "0",
"net0": "name=eth0,bridge=vmbr0,gw=10.1.1.1,hwaddr=FF:FF:FF:FF:FF:FF,ip=10.1.1.3/24,type=veth",
"ostype": "ubuntu",
"digest": "123456789abcdef0123456789abcdef01234567890",
"protection": 0,
"memory": 1000,
"onboot": 0,
"cpuunits": 1024,
"tags": "one, two, three",
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/101/config":
# _get_vm_config (qemu)
return {
"tags": "one, two, three",
"cores": 1,
"ide2": "none,media=cdrom",
"memory": 1000,
"kvm": 1,
"digest": "0123456789abcdef0123456789abcdef0123456789",
"description": "A test qemu",
"sockets": 1,
"onboot": 1,
"vmgenid": "ffffffff-ffff-ffff-ffff-ffffffffffff",
"numa": 0,
"bootdisk": "scsi0",
"cpu": "host",
"name": "test-qemu",
"ostype": "l26",
"hotplug": "network,disk,usb",
"scsi0": "local-lvm:vm-101-disk-0,size=8G",
"net0": "virtio=ff:ff:ff:ff:ff:ff,bridge=vmbr0,firewall=1",
"agent": "1,fstrim_cloned_disks=1",
"bios": "seabios",
"ide0": "local-lvm:vm-101-cloudinit,media=cdrom,size=4M",
"boot": "cdn",
"scsihw": "virtio-scsi-pci",
"smbios1": "uuid=ffffffff-ffff-ffff-ffff-ffffffffffff"
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/102/config":
# _get_vm_config (qemu)
return {
"numa": 0,
"digest": "460add1531a7068d2ae62d54f67e8fb9493dece9",
"ide2": "none,media=cdrom",
"bootdisk": "sata0",
"name": "test-qemu-windows",
"balloon": 0,
"cpulimit": "4",
"agent": "1",
"cores": 6,
"sata0": "storage:vm-102-disk-0,size=100G",
"memory": 10240,
"smbios1": "uuid=127301fc-0122-48d5-8fc5-c04fa78d8146",
"scsihw": "virtio-scsi-pci",
"sockets": 1,
"ostype": "win8",
"net0": "virtio=ff:ff:ff:ff:ff:ff,bridge=vmbr0",
"onboot": 1
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/103/config":
# _get_vm_config (qemu)
return {
'scsi1': 'storage:vm-103-disk-3,size=30G',
'sockets': 1,
'memory': 8192,
'ostype': 'l26',
'scsihw': 'virtio-scsi-pci',
"net0": "virtio=ff:ff:ff:ff:ff:ff,bridge=vmbr0",
"net1": "virtio=ff:ff:ff:ff:ff:ff,bridge=vmbr1",
'bootdisk': 'scsi0',
'scsi0': 'storage:vm-103-disk-0,size=10G',
'name': 'test-qemu-multi-nic',
'cores': 4,
'digest': '51b7599f869b9a3f564804a0aed290f3de803292',
'smbios1': 'uuid=863b31c3-42ca-4a92-aed7-4111f342f70a',
'agent': '1,type=virtio',
'ide2': 'none,media=cdrom',
'balloon': 0,
'numa': 0,
'scsi2': 'storage:vm-103-disk-2,size=10G',
'serial0': 'socket',
'vmgenid': 'ddfb79b2-b484-4d66-88e7-6e76f2d1be77',
'onboot': 1,
'tablet': 0
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/101/agent/network-get-interfaces":
# _get_agent_network_interfaces
return {"result": [
{
"hardware-address": "00:00:00:00:00:00",
"ip-addresses": [
{
"prefix": 8,
"ip-address-type": "ipv4",
"ip-address": "127.0.0.1"
},
{
"ip-address-type": "ipv6",
"ip-address": "::1",
"prefix": 128
}],
"statistics": {
"rx-errs": 0,
"rx-bytes": 163244,
"rx-packets": 1623,
"rx-dropped": 0,
"tx-dropped": 0,
"tx-packets": 1623,
"tx-bytes": 163244,
"tx-errs": 0},
"name": "lo"},
{
"statistics": {
"rx-packets": 4025,
"rx-dropped": 12,
"rx-bytes": 324105,
"rx-errs": 0,
"tx-errs": 0,
"tx-bytes": 368860,
"tx-packets": 3479,
"tx-dropped": 0},
"name": "eth0",
"ip-addresses": [
{
"prefix": 24,
"ip-address-type": "ipv4",
"ip-address": "10.1.2.3"
},
{
"prefix": 64,
"ip-address": "fd8c:4687:e88d:1be3:5b70:7b88:c79c:293",
"ip-address-type": "ipv6"
}],
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"hardware-address": "ff:ff:ff:ff:ff:ff",
"ip-addresses": [
{
"prefix": 16,
"ip-address": "10.10.2.3",
"ip-address-type": "ipv4"
}],
"name": "docker0",
"statistics": {
"rx-bytes": 0,
"rx-errs": 0,
"rx-dropped": 0,
"rx-packets": 0,
"tx-packets": 0,
"tx-dropped": 0,
"tx-errs": 0,
"tx-bytes": 0
}}]}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/102/agent/network-get-interfaces":
# _get_agent_network_interfaces
return {"result": {'error': {'desc': 'this feature or command is not currently supported', 'class': 'Unsupported'}}}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/103/agent/network-get-interfaces":
# _get_agent_network_interfaces
return {
"result": [
{
"statistics": {
"tx-errs": 0,
"rx-errs": 0,
"rx-dropped": 0,
"tx-bytes": 48132932372,
"tx-dropped": 0,
"rx-bytes": 48132932372,
"tx-packets": 178578980,
"rx-packets": 178578980
},
"hardware-address": "ff:ff:ff:ff:ff:ff",
"ip-addresses": [
{
"ip-address-type": "ipv4",
"prefix": 8,
"ip-address": "127.0.0.1"
}
],
"name": "lo"
},
{
"name": "eth0",
"ip-addresses": [
{
"ip-address-type": "ipv4",
"prefix": 24,
"ip-address": "172.16.0.143"
}
],
"statistics": {
"rx-errs": 0,
"tx-errs": 0,
"rx-packets": 660028,
"tx-packets": 304599,
"tx-dropped": 0,
"rx-bytes": 1846743499,
"tx-bytes": 1287844926,
"rx-dropped": 0
},
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"name": "eth1",
"hardware-address": "ff:ff:ff:ff:ff:ff",
"statistics": {
"rx-bytes": 235717091946,
"tx-dropped": 0,
"rx-dropped": 0,
"tx-bytes": 123411636251,
"rx-packets": 540431277,
"tx-packets": 468411864,
"rx-errs": 0,
"tx-errs": 0
},
"ip-addresses": [
{
"ip-address": "10.0.0.133",
"prefix": 24,
"ip-address-type": "ipv4"
}
]
},
{
"name": "docker0",
"ip-addresses": [
{
"ip-address": "172.17.0.1",
"prefix": 16,
"ip-address-type": "ipv4"
}
],
"hardware-address": "ff:ff:ff:ff:ff:ff",
"statistics": {
"rx-errs": 0,
"tx-errs": 0,
"rx-packets": 0,
"tx-packets": 0,
"tx-dropped": 0,
"rx-bytes": 0,
"rx-dropped": 0,
"tx-bytes": 0
}
},
{
"hardware-address": "ff:ff:ff:ff:ff:ff",
"name": "datapath"
},
{
"name": "weave",
"ip-addresses": [
{
"ip-address": "10.42.0.1",
"ip-address-type": "ipv4",
"prefix": 16
}
],
"hardware-address": "ff:ff:ff:ff:ff:ff",
"statistics": {
"rx-bytes": 127289123306,
"tx-dropped": 0,
"rx-dropped": 0,
"tx-bytes": 43827573343,
"rx-packets": 132750542,
"tx-packets": 74218762,
"rx-errs": 0,
"tx-errs": 0
}
},
{
"name": "vethwe-datapath",
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"name": "vethwe-bridge",
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"hardware-address": "ff:ff:ff:ff:ff:ff",
"name": "vxlan-6784"
},
{
"name": "vethwepl0dfe1fe",
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"name": "vethweplf1e7715",
"hardware-address": "ff:ff:ff:ff:ff:ff"
},
{
"hardware-address": "ff:ff:ff:ff:ff:ff",
"name": "vethwepl9d244a1"
},
{
"hardware-address": "ff:ff:ff:ff:ff:ff",
"name": "vethwepl2ca477b"
},
{
"name": "nomacorip",
}
]
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/lxc/100/status/current":
# _get_vm_status (lxc)
return {
"swap": 0,
"name": "test-lxc",
"diskread": 0,
"vmid": 100,
"diskwrite": 0,
"pid": 9000,
"mem": 89980928,
"netin": 1950776396424,
"disk": 4998168576,
"cpu": 0.00163430613110039,
"type": "lxc",
"uptime": 6793736,
"maxmem": 1073741824,
"status": "running",
"cpus": "1",
"ha": {
"group": 'null',
"state": "started",
"managed": 1
},
"maxdisk": 3348329267200,
"netout": 1947793356037,
"maxswap": 1073741824
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/101/status/current":
# _get_vm_status (qemu)
return {
"status": "stopped",
"uptime": 0,
"maxmem": 5364514816,
"maxdisk": 34359738368,
"netout": 0,
"cpus": 2,
"ha": {
"managed": 0
},
"diskread": 0,
"vmid": 101,
"diskwrite": 0,
"name": "test-qemu",
"cpu": 0,
"disk": 0,
"netin": 0,
"mem": 0,
"qmpstatus": "stopped"
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/102/status/current":
# _get_vm_status (qemu)
return {
"status": "stopped",
"uptime": 0,
"maxmem": 5364514816,
"maxdisk": 34359738368,
"netout": 0,
"cpus": 2,
"ha": {
"managed": 0
},
"diskread": 0,
"vmid": 102,
"diskwrite": 0,
"name": "test-qemu-windows",
"cpu": 0,
"disk": 0,
"netin": 0,
"mem": 0,
"qmpstatus": "prelaunch"
}
elif url == "https://localhost:8006/api2/json/nodes/testnode/qemu/103/status/current":
# _get_vm_status (qemu)
return {
"status": "stopped",
"uptime": 0,
"maxmem": 5364514816,
"maxdisk": 34359738368,
"netout": 0,
"cpus": 2,
"ha": {
"managed": 0
},
"diskread": 0,
"vmid": 103,
"diskwrite": 0,
"name": "test-qemu-multi-nic",
"cpu": 0,
"disk": 0,
"netin": 0,
"mem": 0,
"qmpstatus": "paused"
}
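# Mocked response for the inventory plugin's _get_vm_snapshots(): one real
# "clean" snapshot plus the "current" placeholder entry that Proxmox reports
# for the live VM state.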
def get_vm_snapshots(node, properties, vmtype, vmid, name):
return [
{"description": "",
"name": "clean",
"snaptime": 1000,
"vmstate": 0
},
{"name": "current",
"digest": "1234689abcdf",
"running": 0,
"description": "You are here!",
"parent": "clean"
}]
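# Stand-in for the plugin's get_option(): each test passes an opts dict, and
# unknown options fall back to opts.get('default', False).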
def get_option(opts):
def fn(option):
default = opts.get('default', False)
return opts.get(option, default)
return fn
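# Full _populate() run with qemu_extended_statuses enabled: checks pool and
# status groups, QEMU agent interface facts, LXC discovery, exclusion of the
# template VM and presence of the offline node, all against the mocked API above.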
def test_populate(inventory, mocker):
# module settings
inventory.proxmox_user = 'root@pam'
inventory.proxmox_password = 'password'
inventory.proxmox_url = 'https://localhost:8006'
inventory.group_prefix = 'proxmox_'
inventory.facts_prefix = 'proxmox_'
inventory.strict = False
inventory.exclude_nodes = False
opts = {
'group_prefix': 'proxmox_',
'facts_prefix': 'proxmox_',
'want_facts': True,
'want_proxmox_nodes_ansible_host': True,
'qemu_extended_statuses': True,
'exclude_nodes': False
}
# bypass authentication and API fetch calls
inventory._get_auth = mocker.MagicMock(side_effect=get_auth)
inventory._get_json = mocker.MagicMock(side_effect=get_json)
inventory._get_vm_snapshots = mocker.MagicMock(side_effect=get_vm_snapshots)
inventory.get_option = mocker.MagicMock(side_effect=get_option(opts))
inventory._can_add_host = mocker.MagicMock(return_value=True)
inventory._populate()
# get different hosts
host_qemu = inventory.inventory.get_host('test-qemu')
host_qemu_windows = inventory.inventory.get_host('test-qemu-windows')
host_qemu_multi_nic = inventory.inventory.get_host('test-qemu-multi-nic')
host_qemu_template = inventory.inventory.get_host('test-qemu-template')
host_lxc = inventory.inventory.get_host('test-lxc')
# check if qemu-test is in the proxmox_pool_test group
assert 'proxmox_pool_test' in inventory.inventory.groups
group_qemu = inventory.inventory.groups['proxmox_pool_test']
assert group_qemu.hosts == [host_qemu]
# check if qemu-test has eth0 interface in agent_interfaces fact
assert 'eth0' in [d['name'] for d in host_qemu.get_vars()['proxmox_agent_interfaces']]
# check if qemu-multi-nic has multiple network interfaces
for iface_name in ['eth0', 'eth1', 'weave']:
assert iface_name in [d['name'] for d in host_qemu_multi_nic.get_vars()['proxmox_agent_interfaces']]
# check if interface with no mac-address or ip-address defaults correctly
assert [iface for iface in host_qemu_multi_nic.get_vars()['proxmox_agent_interfaces']
if iface['name'] == 'nomacorip'
and iface['mac-address'] == ''
and iface['ip-addresses'] == []
]
# check to make sure qemu-windows doesn't have proxmox_agent_interfaces
assert "proxmox_agent_interfaces" not in host_qemu_windows.get_vars()
# check if lxc-test has been discovered correctly
group_lxc = inventory.inventory.groups['proxmox_all_lxc']
assert group_lxc.hosts == [host_lxc]
# check if qemu template is not present
assert host_qemu_template is None
# check that offline node is in inventory
assert inventory.inventory.get_host('testnode2')
# make sure that ['prelaunch', 'paused'] are in the group list
for group in ['paused', 'prelaunch']:
assert ('%sall_%s' % (inventory.group_prefix, group)) in inventory.inventory.groups
# check if qemu-windows is in the prelaunch group
group_prelaunch = inventory.inventory.groups['proxmox_all_prelaunch']
assert group_prelaunch.hosts == [host_qemu_windows]
# check if qemu-multi-nic is in the paused group
group_paused = inventory.inventory.groups['proxmox_all_paused']
assert group_paused.hosts == [host_qemu_multi_nic]
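# With qemu_extended_statuses disabled, the extra 'paused' and 'prelaunch'
# status groups must not be created.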
def test_populate_missing_qemu_extended_groups(inventory, mocker):
# module settings
inventory.proxmox_user = 'root@pam'
inventory.proxmox_password = 'password'
inventory.proxmox_url = 'https://localhost:8006'
inventory.group_prefix = 'proxmox_'
inventory.facts_prefix = 'proxmox_'
inventory.strict = False
inventory.exclude_nodes = False
opts = {
'group_prefix': 'proxmox_',
'facts_prefix': 'proxmox_',
'want_facts': True,
'want_proxmox_nodes_ansible_host': True,
'qemu_extended_statuses': False,
'exclude_nodes': False
}
# bypass authentication and API fetch calls
inventory._get_auth = mocker.MagicMock(side_effect=get_auth)
inventory._get_json = mocker.MagicMock(side_effect=get_json)
inventory._get_vm_snapshots = mocker.MagicMock(side_effect=get_vm_snapshots)
inventory.get_option = mocker.MagicMock(side_effect=get_option(opts))
inventory._can_add_host = mocker.MagicMock(return_value=True)
inventory._populate()
# make sure that ['prelaunch', 'paused'] are not in the group list
for group in ['paused', 'prelaunch']:
assert ('%sall_%s' % (inventory.group_prefix, group)) not in inventory.inventory.groups
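# With exclude_nodes enabled, Proxmox nodes must not show up as hosts, in a
# nodes group, or in the 'ungrouped' group.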
def test_populate_exclude_nodes(inventory, mocker):
# module settings
inventory.proxmox_user = 'root@pam'
inventory.proxmox_password = 'password'
inventory.proxmox_url = 'https://localhost:8006'
inventory.group_prefix = 'proxmox_'
inventory.facts_prefix = 'proxmox_'
inventory.strict = False
inventory.exclude_nodes = True
opts = {
'group_prefix': 'proxmox_',
'facts_prefix': 'proxmox_',
'want_facts': True,
'want_proxmox_nodes_ansible_host': True,
'qemu_extended_statuses': False,
'exclude_nodes': True
}
# bypass authentication and API fetch calls
inventory._get_auth = mocker.MagicMock(side_effect=get_auth)
inventory._get_json = mocker.MagicMock(side_effect=get_json)
inventory._get_vm_snapshots = mocker.MagicMock(side_effect=get_vm_snapshots)
inventory.get_option = mocker.MagicMock(side_effect=get_option(opts))
inventory._can_add_host = mocker.MagicMock(return_value=True)
inventory._populate()
# make sure that nodes are not in the inventory
for node in ['testnode', 'testnode2']:
assert node not in inventory.inventory.hosts
# make sure that nodes group is absent
assert ('%s_nodes' % (inventory.group_prefix)) not in inventory.inventory.groups
# make sure that nodes are not in the "ungrouped" group
for node in ['testnode', 'testnode2']:
assert node not in inventory.inventory.get_groups_dict()["ungrouped"]



@ -1,374 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
from ansible_collections.community.general.plugins.modules import proxmox_backup
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleExitJson, AnsibleFailJson, set_module_args, ModuleTestCase)
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch
__metaclass__ = type
import pytest
proxmoxer = pytest.importorskip('proxmoxer')
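# ACL map returned by the mocked _get_permissions() below; kept to the minimal
# set of rights the backup module's permission checks look for.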
MINIMAL_PERMISSIONS = {
'/sdn/zones': {'Datastore.AllocateSpace': 1, 'Datastore.Audit': 1},
'/nodes': {'Datastore.AllocateSpace': 1, 'Datastore.Audit': 1},
'/sdn': {'Datastore.AllocateSpace': 1, 'Datastore.Audit': 1},
'/vms': {'VM.Audit': 1,
'Sys.Audit': 1,
'Mapping.Audit': 1,
'VM.Backup': 1,
'Datastore.Audit': 1,
'SDN.Audit': 1,
'Pool.Audit': 1},
'/': {'Datastore.Audit': 1, 'Datastore.AllocateSpace': 1},
'/storage/local-zfs': {'Datastore.AllocateSpace': 1,
'Datastore.Audit': 1},
'/storage': {'Datastore.AllocateSpace': 1, 'Datastore.Audit': 1},
'/access': {'Datastore.AllocateSpace': 1, 'Datastore.Audit': 1},
'/vms/101': {'VM.Backup': 1,
'Mapping.Audit': 1,
'Datastore.AllocateSpace': 0,
'Sys.Audit': 1,
'VM.Audit': 1,
'SDN.Audit': 1,
'Pool.Audit': 1,
'Datastore.Audit': 1},
'/vms/100': {'VM.Backup': 1,
'Mapping.Audit': 1,
'Datastore.AllocateSpace': 0,
'Sys.Audit': 1,
'VM.Audit': 1,
'SDN.Audit': 1,
'Pool.Audit': 1,
'Datastore.Audit': 1},
'/pool': {'Datastore.Audit': 1, 'Datastore.AllocateSpace': 1}, }
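# Storage definitions returned by the mocked get_storages(): a PBS backup
# store named 'backup' and a local ZFS pool ('local-zfs').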
STORAGE = [{'type': 'pbs',
'username': 'test@pbs',
'datastore': 'Backup-Pool',
'server': '10.0.0.1',
'shared': 1,
'fingerprint': '94:fd:ac:e7:d5:36:0e:11:5b:23:05:40:d2:a4:e1:8a:c1:52:41:01:07:28:c0:4d:c5:ee:df:7f:7c:03:ab:41',
'prune-backups': 'keep-all=1',
'storage': 'backup',
'content': 'backup',
'digest': 'ca46a68d7699de061c139d714892682ea7c9d681'},
{'nodes': 'node1,node2,node3',
'sparse': 1,
'type': 'zfspool',
'content': 'rootdir,images',
'digest': 'ca46a68d7699de061c139d714892682ea7c9d681',
'pool': 'rpool/data',
'storage': 'local-zfs'}]
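# Guests and cluster nodes returned by the mocked _get_resources();
# return_valid_resources() below dispatches on the resource type.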
VMS = [{"diskwrite": 0,
"vmid": 100,
"node": "node1",
"id": "lxc/100",
"maxdisk": 10000,
"template": 0,
"disk": 10000,
"uptime": 10000,
"maxmem": 10000,
"maxcpu": 1,
"netin": 10000,
"type": "lxc",
"netout": 10000,
"mem": 10000,
"diskread": 10000,
"cpu": 0.01,
"name": "test-lxc",
"status": "running"},
{"diskwrite": 0,
"vmid": 101,
"node": "node2",
"id": "kvm/101",
"maxdisk": 10000,
"template": 0,
"disk": 10000,
"uptime": 10000,
"maxmem": 10000,
"maxcpu": 1,
"netin": 10000,
"type": "lxc",
"netout": 10000,
"mem": 10000,
"diskread": 10000,
"cpu": 0.01,
"name": "test-kvm",
"status": "running"}
]
NODES = [{'level': '',
'type': 'node',
'node': 'node1',
'status': 'online',
'id': 'node/node1',
'cgroup-mode': 2},
{'status': 'online',
'id': 'node/node2',
'cgroup-mode': 2,
'level': '',
'node': 'node2',
'type': 'node'},
{'status': 'online',
'id': 'node/node3',
'cgroup-mode': 2,
'level': '',
'node': 'node3',
'type': 'node'},
]
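# Canned task status, vzdump UPIDs and task logs keyed by node name, consumed
# by the return_* side-effect helpers below.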
TASK_API_RETURN = {
"node1": {
'starttime': 1732606253,
'status': 'stopped',
'type': 'vzdump',
'pstart': 517463911,
'upid': 'UPID:node1:003F8C63:1E7FB79C:67449780:vzdump:100:root@pam:',
'id': '100',
'node': 'hypervisor',
'pid': 541669,
'user': 'test@pve',
'exitstatus': 'OK'},
"node2": {
'starttime': 1732606253,
'status': 'stopped',
'type': 'vzdump',
'pstart': 517463911,
'upid': 'UPID:node2:000029DD:1599528B:6108F068:vzdump:101:root@pam:',
'id': '101',
'node': 'hypervisor',
'pid': 541669,
'user': 'test@pve',
'exitstatus': 'OK'},
}
VZDUMP_API_RETURN = {
"node1": "UPID:node1:003F8C63:1E7FB79C:67449780:vzdump:100:root@pam:",
"node2": "UPID:node2:000029DD:1599528B:6108F068:vzdump:101:root@pam:",
"node3": "OK",
}
TASKLOG_API_RETURN = {"node1": [{'n': 1,
't': "INFO: starting new backup job: vzdump 100 --mode snapshot --node node1 "
"--notes-template '{{guestname}}' --storage backup --notification-mode auto"},
{'t': 'INFO: Starting Backup of VM 100 (lxc)',
'n': 2},
{'n': 23, 't': 'INFO: adding notes to backup'},
{'n': 24,
't': 'INFO: Finished Backup of VM 100 (00:00:03)'},
{'n': 25,
't': 'INFO: Backup finished at 2024-11-25 16:28:03'},
{'t': 'INFO: Backup job finished successfully',
'n': 26},
{'n': 27, 't': 'TASK OK'}],
"node2": [{'n': 1,
't': "INFO: starting new backup job: vzdump 101 --mode snapshot --node node2 "
"--notes-template '{{guestname}}' --storage backup --notification-mode auto"},
{'t': 'INFO: Starting Backup of VM 101 (kvm)',
'n': 2},
{'n': 24,
't': 'INFO: Finished Backup of VM 100 (00:00:03)'},
{'n': 25,
't': 'INFO: Backup finished at 2024-11-25 16:28:03'},
{'t': 'INFO: Backup job finished successfully',
'n': 26},
{'n': 27, 't': 'TASK OK'}],
}
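# side_effect helpers wired into the mocks in setUp(); each one dispatches on
# the node (or resource type) argument and returns the matching fixture.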
def return_valid_resources(resource_type, *args, **kwargs):
if resource_type == "vm":
return VMS
if resource_type == "node":
return NODES
def return_vzdump_api(node, *args, **kwargs):
if node in ("node1", "node2", "node3"):
return VZDUMP_API_RETURN[node]
def return_logs_api(node, *args, **kwargs):
if node in ("node1", "node2"):
return TASKLOG_API_RETURN[node]
def return_task_status_api(node, *args, **kwargs):
if node in ("node1", "node2"):
return TASK_API_RETURN[node]
class TestProxmoxBackup(ModuleTestCase):
def setUp(self):
super(TestProxmoxBackup, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_backup
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect",
).start()
self.mock_get_permissions = patch.object(
proxmox_backup.ProxmoxBackupAnsible, "_get_permissions").start()
self.mock_get_storages = patch.object(proxmox_utils.ProxmoxAnsible,
"get_storages").start()
self.mock_get_resources = patch.object(
proxmox_backup.ProxmoxBackupAnsible, "_get_resources").start()
self.mock_get_tasklog = patch.object(
proxmox_backup.ProxmoxBackupAnsible, "_get_tasklog").start()
self.mock_post_vzdump = patch.object(
proxmox_backup.ProxmoxBackupAnsible, "_post_vzdump").start()
self.mock_get_taskok = patch.object(
proxmox_backup.ProxmoxBackupAnsible, "_get_taskok").start()
self.mock_get_permissions.return_value = MINIMAL_PERMISSIONS
self.mock_get_storages.return_value = STORAGE
self.mock_get_resources.side_effect = return_valid_resources
self.mock_get_taskok.side_effect = return_task_status_api
self.mock_get_tasklog.side_effect = return_logs_api
self.mock_post_vzdump.side_effect = return_vzdump_api
def tearDown(self):
self.connect_mock.stop()
self.mock_get_permissions.stop()
self.mock_get_storages.stop()
self.mock_get_resources.stop()
super(TestProxmoxBackup, self).tearDown()
def test_proxmox_backup_without_argument(self):
with set_module_args({}):
with pytest.raises(AnsibleFailJson):
proxmox_backup.main()
def test_create_backup_check_mode(self):
with set_module_args(
{
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "all",
"storage": "backup",
"_ansible_check_mode": True,
}
):
with pytest.raises(AnsibleExitJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "Backups would be created"
assert len(result["backups"]) == 0
assert self.mock_get_taskok.call_count == 0
assert self.mock_get_tasklog.call_count == 0
assert self.mock_post_vzdump.call_count == 0
def test_create_backup_all_mode(self):
with set_module_args({
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "all",
"storage": "backup",
}):
with pytest.raises(AnsibleExitJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "Backup tasks created"
for backup_result in result["backups"]:
assert backup_result["upid"] in {
VZDUMP_API_RETURN[key] for key in VZDUMP_API_RETURN}
assert self.mock_get_taskok.call_count == 0
assert self.mock_post_vzdump.call_count == 3
def test_create_backup_include_mode_with_wait(self):
with set_module_args({
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "include",
"node": "node1",
"storage": "backup",
"vmids": [100],
"wait": True
}):
with pytest.raises(AnsibleExitJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "Backups succeeded"
for backup_result in result["backups"]:
assert backup_result["upid"] in {
VZDUMP_API_RETURN[key] for key in VZDUMP_API_RETURN}
assert self.mock_get_taskok.call_count == 1
assert self.mock_post_vzdump.call_count == 1
def test_fail_insufficient_permissions(self):
with set_module_args({
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "include",
"storage": "backup",
"performance_tweaks": "max-workers=2",
"vmids": [100],
"wait": True
}):
with pytest.raises(AnsibleFailJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["msg"] == "Insufficient permission: Performance_tweaks and bandwidth require 'Sys.Modify' permission for '/'"
assert self.mock_get_taskok.call_count == 0
assert self.mock_post_vzdump.call_count == 0
def test_fail_missing_node(self):
with set_module_args({
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "include",
"storage": "backup",
"node": "nonexistingnode",
"vmids": [100],
"wait": True
}):
with pytest.raises(AnsibleFailJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["msg"] == "Node nonexistingnode was specified, but does not exist on the cluster"
assert self.mock_get_taskok.call_count == 0
assert self.mock_post_vzdump.call_count == 0
def test_fail_missing_storage(self):
with set_module_args({
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"mode": "include",
"storage": "nonexistingstorage",
"vmids": [100],
"wait": True
}):
with pytest.raises(AnsibleFailJson) as exc_info:
proxmox_backup.main()
result = exc_info.value.args[0]
assert result["msg"] == "Storage nonexistingstorage does not exist in the cluster"
assert self.mock_get_taskok.call_count == 0
assert self.mock_post_vzdump.call_count == 0


@ -1,275 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Marzieh Raoufnezhad <raoufnezhad at gmail.com>
# Copyright (c) 2024 Maryam Mayabi <mayabi.ahm at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import pytest
proxmoxer = pytest.importorskip("proxmoxer")
from ansible_collections.community.general.plugins.modules import proxmox_backup_info
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
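# Fixtures: cluster resources and backup jobs returned by the mocked API,
# followed by the expected module output - one entry per vmid with the
# resolved vm_name and a human-readable next-run, plus the raw job list.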
RESOURCE_LIST = [
{
"uptime": 0,
"diskwrite": 0,
"name": "test01",
"maxcpu": 0,
"node": "NODE1",
"mem": 0,
"netout": 0,
"netin": 0,
"maxmem": 0,
"diskread": 0,
"disk": 0,
"maxdisk": 0,
"status": "running",
"cpu": 0,
"id": "qemu/100",
"template": 0,
"vmid": 100,
"type": "qemu"
},
{
"uptime": 0,
"diskwrite": 0,
"name": "test02",
"maxcpu": 0,
"node": "NODE1",
"mem": 0,
"netout": 0,
"netin": 0,
"maxmem": 0,
"diskread": 0,
"disk": 0,
"maxdisk": 0,
"status": "running",
"cpu": 0,
"id": "qemu/101",
"template": 0,
"vmid": 101,
"type": "qemu"
},
{
"uptime": 0,
"diskwrite": 0,
"name": "test03",
"maxcpu": 0,
"node": "NODE2",
"mem": 0,
"netout": 0,
"netin": 0,
"maxmem": 0,
"diskread": 0,
"disk": 0,
"maxdisk": 0,
"status": "running",
"cpu": 0,
"id": "qemu/102",
"template": 0,
"vmid": 102,
"type": "qemu"
}
]
BACKUP_JOBS = [
{
"type": "vzdump",
"id": "backup-83831498-c631",
"storage": "local",
"vmid": "100",
"enabled": 1,
"next-run": 1735138800,
"mailnotification": "always",
"schedule": "06,18:30",
"mode": "snapshot",
"notes-template": "guestname"
},
{
"schedule": "sat 15:00",
"notes-template": "guestname",
"mode": "snapshot",
"mailnotification": "always",
"next-run": 1735385400,
"type": "vzdump",
"enabled": 1,
"vmid": "100,101,102",
"storage": "local",
"id": "backup-70025700-2302",
}
]
EXPECTED_BACKUP_OUTPUT = [
{
"bktype": "vzdump",
"enabled": 1,
"id": "backup-83831498-c631",
"mode": "snapshot",
"next-run": "2024-12-25 15:00:00",
"schedule": "06,18:30",
"storage": "local",
"vm_name": "test01",
"vmid": "100"
},
{
"bktype": "vzdump",
"enabled": 1,
"id": "backup-70025700-2302",
"mode": "snapshot",
"next-run": "2024-12-28 11:30:00",
"schedule": "sat 15:00",
"storage": "local",
"vm_name": "test01",
"vmid": "100"
},
{
"bktype": "vzdump",
"enabled": 1,
"id": "backup-70025700-2302",
"mode": "snapshot",
"next-run": "2024-12-28 11:30:00",
"schedule": "sat 15:00",
"storage": "local",
"vm_name": "test02",
"vmid": "101"
},
{
"bktype": "vzdump",
"enabled": 1,
"id": "backup-70025700-2302",
"mode": "snapshot",
"next-run": "2024-12-28 11:30:00",
"schedule": "sat 15:00",
"storage": "local",
"vm_name": "test03",
"vmid": "102"
}
]
EXPECTED_BACKUP_JOBS_OUTPUT = [
{
"enabled": 1,
"id": "backup-83831498-c631",
"mailnotification": "always",
"mode": "snapshot",
"next-run": 1735138800,
"notes-template": "guestname",
"schedule": "06,18:30",
"storage": "local",
"type": "vzdump",
"vmid": "100"
},
{
"enabled": 1,
"id": "backup-70025700-2302",
"mailnotification": "always",
"mode": "snapshot",
"next-run": 1735385400,
"notes-template": "guestname",
"schedule": "sat 15:00",
"storage": "local",
"type": "vzdump",
"vmid": "100,101,102"
}
]
class TestProxmoxBackupInfoModule(ModuleTestCase):
def setUp(self):
super(TestProxmoxBackupInfoModule, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_backup_info
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect",
).start()
self.connect_mock.return_value.cluster.resources.get.return_value = (
RESOURCE_LIST
)
self.connect_mock.return_value.cluster.backup.get.return_value = (
BACKUP_JOBS
)
def tearDown(self):
self.connect_mock.stop()
super(TestProxmoxBackupInfoModule, self).tearDown()
def test_module_fail_when_required_args_missing(self):
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args({}):
self.module.main()
result = exc_info.value.args[0]
assert result["msg"] == "missing required arguments: api_host, api_user"
def test_get_all_backups_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret'
}):
self.module.main()
result = exc_info.value.args[0]
assert result["backup_info"] == EXPECTED_BACKUP_OUTPUT
def test_get_specific_backup_information_by_vmname(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmname = 'test01'
expected_output = [
backup for backup in EXPECTED_BACKUP_OUTPUT if backup["vm_name"] == vmname
]
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'vm_name': vmname
}):
self.module.main()
result = exc_info.value.args[0]
assert result["backup_info"] == expected_output
assert len(result["backup_info"]) == 2
def test_get_specific_backup_information_by_vmid(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = "101"
expected_output = [
backup for backup in EXPECTED_BACKUP_OUTPUT if backup["vmid"] == vmid
]
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'vm_id': vmid
}):
self.module.main()
result = exc_info.value.args[0]
assert result["backup_info"] == expected_output
assert len(result["backup_info"]) == 1
def test_get_specific_backup_information_by_backupjobs(self):
with pytest.raises(AnsibleExitJson) as exc_info:
backupjobs = True
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'backup_jobs': backupjobs
}):
self.module.main()
result = exc_info.value.args[0]
assert result["backup_info"] == EXPECTED_BACKUP_JOBS_OUTPUT


@ -1,168 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2021, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
import pytest
proxmoxer = pytest.importorskip("proxmoxer")
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason="The proxmoxer dependency requires python2.7 or higher",
)
from ansible_collections.community.general.plugins.modules import proxmox_kvm
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import (
patch,
DEFAULT,
)
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
class TestProxmoxKvmModule(ModuleTestCase):
def setUp(self):
super(TestProxmoxKvmModule, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_kvm
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect"
).start()
self.get_node_mock = patch.object(
proxmox_utils.ProxmoxAnsible, "get_node"
).start()
self.get_vm_mock = patch.object(proxmox_utils.ProxmoxAnsible, "get_vm").start()
self.create_vm_mock = patch.object(
proxmox_kvm.ProxmoxKvmAnsible, "create_vm"
).start()
def tearDown(self):
self.create_vm_mock.stop()
self.get_vm_mock.stop()
self.get_node_mock.stop()
self.connect_mock.stop()
super(TestProxmoxKvmModule, self).tearDown()
def test_module_fail_when_required_args_missing(self):
with self.assertRaises(AnsibleFailJson):
with set_module_args({}):
self.module.main()
def test_module_exits_unchanged_when_provided_vmid_exists(self):
with set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"vmid": "100",
"node": "pve",
}
):
self.get_vm_mock.return_value = [{"vmid": "100"}]
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert self.get_vm_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is False
assert result["msg"] == "VM with vmid <100> already exists"
def test_vm_created_when_vmid_not_exist_but_name_already_exist(self):
with set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"vmid": "100",
"name": "existing.vm.local",
"node": "pve",
}
):
self.get_vm_mock.return_value = None
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert self.get_vm_mock.call_count == 1
assert self.get_node_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "VM existing.vm.local with vmid 100 deployed"
def test_vm_not_created_when_name_already_exist_and_vmid_not_set(self):
with set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"name": "existing.vm.local",
"node": "pve",
}
):
with patch.object(proxmox_utils.ProxmoxAnsible, "get_vmid") as get_vmid_mock:
get_vmid_mock.return_value = {
"vmid": 100,
"name": "existing.vm.local",
}
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert get_vmid_mock.call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is False
def test_vm_created_when_name_doesnt_exist_and_vmid_not_set(self):
with set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"name": "existing.vm.local",
"node": "pve",
}
):
self.get_vm_mock.return_value = None
with patch.multiple(
proxmox_utils.ProxmoxAnsible, get_vmid=DEFAULT, get_nextvmid=DEFAULT
) as utils_mock:
utils_mock["get_vmid"].return_value = None
utils_mock["get_nextvmid"].return_value = 101
with pytest.raises(AnsibleExitJson) as exc_info:
self.module.main()
assert utils_mock["get_vmid"].call_count == 1
assert utils_mock["get_nextvmid"].call_count == 1
result = exc_info.value.args[0]
assert result["changed"] is True
assert result["msg"] == "VM existing.vm.local with vmid 101 deployed"
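# parse_mac()/parse_dev() pull the MAC address and the backing volume out of
# Proxmox net/disk definition strings; plain input/output checks below.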
def test_parse_mac(self):
assert (
proxmox_kvm.parse_mac("virtio=00:11:22:AA:BB:CC,bridge=vmbr0,firewall=1")
== "00:11:22:AA:BB:CC"
)
def test_parse_dev(self):
assert (
proxmox_kvm.parse_dev("local-lvm:vm-1000-disk-0,format=qcow2")
== "local-lvm:vm-1000-disk-0"
)
assert (
proxmox_kvm.parse_dev("local-lvm:vm-101-disk-1,size=8G")
== "local-lvm:vm-101-disk-1"
)
assert (
proxmox_kvm.parse_dev("local-zfs:vm-1001-disk-0")
== "local-zfs:vm-1001-disk-0"
)


@ -1,131 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import sys
import pytest
proxmoxer = pytest.importorskip('proxmoxer')
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason='The proxmoxer dependency requires python2.7 or higher'
)
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import MagicMock, patch
from ansible_collections.community.general.plugins.modules import proxmox_snap
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import set_module_args
def get_resources(type):
return [{"diskwrite": 0,
"vmid": 100,
"node": "localhost",
"id": "lxc/100",
"maxdisk": 10000,
"template": 0,
"disk": 10000,
"uptime": 10000,
"maxmem": 10000,
"maxcpu": 1,
"netin": 10000,
"type": "lxc",
"netout": 10000,
"mem": 10000,
"diskread": 10000,
"cpu": 0.01,
"name": "test-lxc",
"status": "running"}]
def fake_api(mocker):
r = mocker.MagicMock()
r.cluster.resources.get = MagicMock(side_effect=get_resources)
return r
def test_proxmox_snap_without_argument(capfd):
with set_module_args({}):
with pytest.raises(SystemExit) as results:
proxmox_snap.main()
out, err = capfd.readouterr()
assert not err
assert json.loads(out)['failed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_create_snapshot_check_mode(connect_mock, capfd, mocker):
with set_module_args({
"hostname": "test-lxc",
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"state": "present",
"snapname": "test",
"timeout": "1",
"force": True,
"_ansible_check_mode": True
}):
proxmox_utils.HAS_PROXMOXER = True
connect_mock.side_effect = lambda: fake_api(mocker)
with pytest.raises(SystemExit) as results:
proxmox_snap.main()
out, err = capfd.readouterr()
assert not err
assert not json.loads(out)['changed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_remove_snapshot_check_mode(connect_mock, capfd, mocker):
with set_module_args({
"hostname": "test-lxc",
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"state": "absent",
"snapname": "test",
"timeout": "1",
"force": True,
"_ansible_check_mode": True
}):
proxmox_utils.HAS_PROXMOXER = True
connect_mock.side_effect = lambda: fake_api(mocker)
with pytest.raises(SystemExit) as results:
proxmox_snap.main()
out, err = capfd.readouterr()
assert not err
assert not json.loads(out)['changed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_rollback_snapshot_check_mode(connect_mock, capfd, mocker):
with set_module_args({
"hostname": "test-lxc",
"api_user": "root@pam",
"api_password": "secret",
"api_host": "127.0.0.1",
"state": "rollback",
"snapname": "test",
"timeout": "1",
"force": True,
"_ansible_check_mode": True
}):
proxmox_utils.HAS_PROXMOXER = True
connect_mock.side_effect = lambda: fake_api(mocker)
with pytest.raises(SystemExit) as results:
proxmox_snap.main()
out, err = capfd.readouterr()
assert not err
output = json.loads(out)
assert not output['changed']
assert output['msg'] == "Snapshot test does not exist"


@ -1,90 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2023, Julian Vanden Broeck <julian.vandenbroeck at dalibo.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import pytest
proxmoxer = pytest.importorskip("proxmoxer")
from ansible_collections.community.general.plugins.modules import proxmox_storage_contents_info
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
NODE1 = "pve"
RAW_LIST_OUTPUT = [
{
"content": "backup",
"ctime": 1702528474,
"format": "pbs-vm",
"size": 273804166061,
"subtype": "qemu",
"vmid": 931,
"volid": "datastore:backup/vm/931/2023-12-14T04:34:34Z",
},
{
"content": "backup",
"ctime": 1702582560,
"format": "pbs-vm",
"size": 273804166059,
"subtype": "qemu",
"vmid": 931,
"volid": "datastore:backup/vm/931/2023-12-14T19:36:00Z",
},
]
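# Two PBS-format backup volumes for VM 931 as returned by the mocked storage
# content listing; get_module_args() assembles a minimal parameter set.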
def get_module_args(node, storage, content="all", vmid=None):
return {
"api_host": "host",
"api_user": "user",
"api_password": "password",
"node": node,
"storage": storage,
"content": content,
"vmid": vmid,
}
class TestProxmoxStorageContentsInfo(ModuleTestCase):
def setUp(self):
super(TestProxmoxStorageContentsInfo, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_storage_contents_info
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect",
).start()
self.connect_mock.return_value.nodes.return_value.storage.return_value.content.return_value.get.return_value = (
RAW_LIST_OUTPUT
)
self.connect_mock.return_value.nodes.get.return_value = [{"node": NODE1}]
def tearDown(self):
self.connect_mock.stop()
super(TestProxmoxStorageContentsInfo, self).tearDown()
def test_module_fail_when_required_args_missing(self):
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args({}):
self.module.main()
def test_storage_contents_info(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args(get_module_args(node=NODE1, storage="datastore")):
expected_output = {}
self.module.main()
result = exc_info.value.args[0]
assert not result["changed"]
assert result["proxmox_storage_content"] == RAW_LIST_OUTPUT


@ -1,206 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2021, Andreas Botzner (@paginabianca) <andreas at botzner dot com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
#
# Proxmox Tasks module unit tests.
# The API responses used in these tests were recorded from PVE version 6.4-8
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import sys
import pytest
proxmoxer = pytest.importorskip('proxmoxer')
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason='The proxmoxer dependency requires python2.7 or higher'
)
from ansible_collections.community.general.plugins.modules import proxmox_tasks_info
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import set_module_args
NODE = 'node01'
TASK_UPID = 'UPID:iaclab-01-01:000029DD:1599528B:6108F068:srvreload:networking:root@pam:'
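# Raw task list as returned by the node tasks API, plus the same data with the
# derived 'failed' flag the module is expected to add (True for the task whose
# status is not 'OK').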
TASKS = [
{
"endtime": 1629092710,
"id": "networking",
"node": "iaclab-01-01",
"pid": 3539,
"pstart": 474062216,
"starttime": 1629092709,
"status": "OK",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:00000DD3:1C419D88:6119FB65:srvreload:networking:root@pam:",
"user": "root@pam"
},
{
"endtime": 1627975785,
"id": "networking",
"node": "iaclab-01-01",
"pid": 10717,
"pstart": 362369675,
"starttime": 1627975784,
"status": "command 'ifreload -a' failed: exit code 1",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:000029DD:1599528B:6108F068:srvreload:networking:root@pam:",
"user": "root@pam"
},
{
"endtime": 1627975503,
"id": "networking",
"node": "iaclab-01-01",
"pid": 6778,
"pstart": 362341540,
"starttime": 1627975503,
"status": "OK",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:00001A7A:1598E4A4:6108EF4F:srvreload:networking:root@pam:",
"user": "root@pam"
}
]
EXPECTED_TASKS = [
{
"endtime": 1629092710,
"id": "networking",
"node": "iaclab-01-01",
"pid": 3539,
"pstart": 474062216,
"starttime": 1629092709,
"status": "OK",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:00000DD3:1C419D88:6119FB65:srvreload:networking:root@pam:",
"user": "root@pam",
"failed": False
},
{
"endtime": 1627975785,
"id": "networking",
"node": "iaclab-01-01",
"pid": 10717,
"pstart": 362369675,
"starttime": 1627975784,
"status": "command 'ifreload -a' failed: exit code 1",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:000029DD:1599528B:6108F068:srvreload:networking:root@pam:",
"user": "root@pam",
"failed": True
},
{
"endtime": 1627975503,
"id": "networking",
"node": "iaclab-01-01",
"pid": 6778,
"pstart": 362341540,
"starttime": 1627975503,
"status": "OK",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:00001A7A:1598E4A4:6108EF4F:srvreload:networking:root@pam:",
"user": "root@pam",
"failed": False
}
]
EXPECTED_SINGLE_TASK = [
{
"endtime": 1627975785,
"id": "networking",
"node": "iaclab-01-01",
"pid": 10717,
"pstart": 362369675,
"starttime": 1627975784,
"status": "command 'ifreload -a' failed: exit code 1",
"type": "srvreload",
"upid": "UPID:iaclab-01-01:000029DD:1599528B:6108F068:srvreload:networking:root@pam:",
"user": "root@pam",
"failed": True
},
]
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_without_required_parameters(connect_mock, capfd, mocker):
with set_module_args({}):
with pytest.raises(SystemExit):
proxmox_tasks_info.main()
out, err = capfd.readouterr()
assert not err
assert json.loads(out)['failed']
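# Builds a mocked connection where nodes(...).tasks.get() returns the TASKS
# fixture above.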
def mock_api_tasks_response(mocker):
m = mocker.MagicMock()
g = mocker.MagicMock()
m.nodes = mocker.MagicMock(return_value=g)
g.tasks.get = mocker.MagicMock(return_value=TASKS)
return m
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_get_tasks(connect_mock, capfd, mocker):
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'node': NODE
}):
connect_mock.side_effect = lambda: mock_api_tasks_response(mocker)
proxmox_utils.HAS_PROXMOXER = True
with pytest.raises(SystemExit):
proxmox_tasks_info.main()
out, err = capfd.readouterr()
assert not err
assert len(json.loads(out)['proxmox_tasks']) != 0
assert not json.loads(out)['changed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_get_single_task(connect_mock, capfd, mocker):
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'node': NODE,
'task': TASK_UPID
}):
connect_mock.side_effect = lambda: mock_api_tasks_response(mocker)
proxmox_utils.HAS_PROXMOXER = True
with pytest.raises(SystemExit):
proxmox_tasks_info.main()
out, err = capfd.readouterr()
assert not err
assert len(json.loads(out)['proxmox_tasks']) == 1
assert json.loads(out)
assert not json.loads(out)['changed']
@patch('ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect')
def test_get_non_existent_task(connect_mock, capfd, mocker):
with set_module_args({
'api_host': 'proxmoxhost',
'api_user': 'root@pam',
'api_password': 'supersecret',
'node': NODE,
'task': 'UPID:nonexistent'
}):
connect_mock.side_effect = lambda: mock_api_tasks_response(mocker)
proxmox_utils.HAS_PROXMOXER = True
with pytest.raises(SystemExit):
proxmox_tasks_info.main()
out, err = capfd.readouterr()
assert not err
assert json.loads(out)['failed']
assert 'proxmox_tasks' not in json.loads(out)
assert not json.loads(out)['changed']
assert json.loads(
out)['msg'] == 'Task: UPID:nonexistent does not exist on node: node01.'


@ -1,66 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import sys
import pytest
proxmoxer = pytest.importorskip('proxmoxer')
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason='The proxmoxer dependency requires python2.7 or higher'
)
from ansible_collections.community.general.plugins.modules import proxmox_template
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch, Mock
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
class TestProxmoxTemplateModule(ModuleTestCase):
def setUp(self):
super(TestProxmoxTemplateModule, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_template
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect"
)
self.connect_mock.start()
def tearDown(self):
self.connect_mock.stop()
super(TestProxmoxTemplateModule, self).tearDown()
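# 268435460 bytes is just over 256 MiB (268435456 bytes), so with
# requests_toolbelt unavailable the module must refuse the upload.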
@patch("os.stat")
@patch.multiple(os.path, exists=Mock(return_value=True), isfile=Mock(return_value=True))
def test_module_fail_when_toolbelt_not_installed_and_file_size_is_big(self, mock_stat):
self.module.HAS_REQUESTS_TOOLBELT = False
mock_stat.return_value.st_size = 268435460
with set_module_args(
{
"api_host": "host",
"api_user": "user",
"api_password": "password",
"node": "pve",
"src": "/tmp/mock.iso",
"content_type": "iso"
}
):
with pytest.raises(AnsibleFailJson) as exc_info:
self.module.main()
result = exc_info.value.args[0]
assert result["failed"] is True
assert result["msg"] == "'requests_toolbelt' module is required to upload files larger than 256MB"


@ -1,714 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
import pytest
proxmoxer = pytest.importorskip("proxmoxer")
mandatory_py_version = pytest.mark.skipif(
sys.version_info < (2, 7),
reason="The proxmoxer dependency requires python2.7 or higher",
)
from ansible_collections.community.general.plugins.modules import proxmox_vm_info
from ansible_collections.community.internal_test_tools.tests.unit.compat.mock import patch
from ansible_collections.community.internal_test_tools.tests.unit.plugins.modules.utils import (
AnsibleExitJson,
AnsibleFailJson,
ModuleTestCase,
set_module_args,
)
import ansible_collections.community.general.plugins.module_utils.proxmox as proxmox_utils
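# RAW_CLUSTER_OUTPUT, RAW_LXC_OUTPUT and RAW_QEMU_OUTPUT mimic the cluster
# resource listing and the per-node lxc/qemu listings; EXPECTED_VMS_OUTPUT is
# the merged, normalised result the module should return (vmid cast to int,
# template as bool, per-guest fields folded into the cluster entries).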
NODE1 = "pve"
NODE2 = "pve2"
RAW_CLUSTER_OUTPUT = [
{
"cpu": 0.174069059487628,
"disk": 0,
"diskread": 6656,
"diskwrite": 0,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"node": NODE1,
"status": "running",
"template": 0,
"type": "qemu",
"uptime": 669,
"vmid": 100,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "qemu",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/102",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"node": NODE1,
"status": "running",
"template": 0,
"type": "lxc",
"uptime": 161,
"vmid": 102,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/103",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 103,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/104",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 104,
},
{
"cpu": 0,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/105",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": 0,
"type": "lxc",
"uptime": 0,
"vmid": 105,
},
]
RAW_LXC_OUTPUT = [
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "103",
},
{
"cpu": 0,
"cpus": 2,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"pid": 4076752,
"status": "running",
"swap": 0,
"type": "lxc",
"uptime": 161,
"vmid": "102",
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "104",
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"status": "stopped",
"swap": 0,
"type": "lxc",
"uptime": 0,
"vmid": "105",
},
]
RAW_QEMU_OUTPUT = [
{
"cpu": 0,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"status": "stopped",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0.174069059487628,
"cpus": 1,
"disk": 0,
"diskread": 6656,
"diskwrite": 0,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"pid": 4076688,
"status": "running",
"uptime": 669,
"vmid": 100,
},
]
EXPECTED_VMS_OUTPUT = [
{
"cpu": 0.174069059487628,
"cpus": 1,
"disk": 0,
"diskread": 6656,
"diskwrite": 0,
"id": "qemu/100",
"maxcpu": 1,
"maxdisk": 34359738368,
"maxmem": 4294967296,
"mem": 35304543,
"name": "pxe.home.arpa",
"netin": 416956,
"netout": 17330,
"node": NODE1,
"pid": 4076688,
"status": "running",
"template": False,
"type": "qemu",
"uptime": 669,
"vmid": 100,
},
{
"cpu": 0,
"cpus": 1,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "qemu/101",
"maxcpu": 1,
"maxdisk": 0,
"maxmem": 536870912,
"mem": 0,
"name": "test1",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"template": False,
"type": "qemu",
"uptime": 0,
"vmid": 101,
},
{
"cpu": 0,
"cpus": 2,
"disk": 352190464,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/102",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 28192768,
"name": "test-lxc.home.arpa",
"netin": 102757,
"netout": 446,
"node": NODE1,
"pid": 4076752,
"status": "running",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 161,
"vmid": 102,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/103",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test1-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 103,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/104",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "test-lxc.home.arpa",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 104,
},
{
"cpu": 0,
"cpus": 2,
"disk": 0,
"diskread": 0,
"diskwrite": 0,
"id": "lxc/105",
"maxcpu": 2,
"maxdisk": 10737418240,
"maxmem": 536870912,
"maxswap": 536870912,
"mem": 0,
"name": "",
"netin": 0,
"netout": 0,
"node": NODE2,
"pool": "pool1",
"status": "stopped",
"swap": 0,
"template": False,
"type": "lxc",
"uptime": 0,
"vmid": 105,
},
]
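# Convenience builder for the module's parameters; type defaults to 'all' and
# config to 'none'.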
def get_module_args(type="all", node=None, vmid=None, name=None, config="none"):
return {
"api_host": "host",
"api_user": "user",
"api_password": "password",
"node": node,
"type": type,
"vmid": vmid,
"name": name,
"config": config,
}
class TestProxmoxVmInfoModule(ModuleTestCase):
def setUp(self):
super(TestProxmoxVmInfoModule, self).setUp()
proxmox_utils.HAS_PROXMOXER = True
self.module = proxmox_vm_info
self.connect_mock = patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible._connect",
).start()
self.connect_mock.return_value.nodes.return_value.lxc.return_value.get.return_value = (
RAW_LXC_OUTPUT
)
self.connect_mock.return_value.nodes.return_value.qemu.return_value.get.return_value = (
RAW_QEMU_OUTPUT
)
self.connect_mock.return_value.cluster.return_value.resources.return_value.get.return_value = (
RAW_CLUSTER_OUTPUT
)
self.connect_mock.return_value.nodes.get.return_value = [{"node": NODE1}]
def tearDown(self):
self.connect_mock.stop()
super(TestProxmoxVmInfoModule, self).tearDown()
def test_module_fail_when_required_args_missing(self):
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args({}):
self.module.main()
result = exc_info.value.args[0]
assert result["msg"] == "missing required arguments: api_host, api_user"
def test_get_lxc_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args(get_module_args(type="lxc")):
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["type"] == "lxc"]
self.module.main()
result = exc_info.value.args[0]
assert result["changed"] is False
assert result["proxmox_vms"] == expected_output
def test_get_qemu_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args(get_module_args(type="qemu")):
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["type"] == "qemu"]
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
def test_get_all_vms_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args(get_module_args()):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == EXPECTED_VMS_OUTPUT
def test_vmid_is_converted_to_int(self):
with pytest.raises(AnsibleExitJson) as exc_info:
with set_module_args(get_module_args(type="lxc")):
self.module.main()
result = exc_info.value.args[0]
assert isinstance(result["proxmox_vms"][0]["vmid"], int)
def test_get_specific_lxc_vm_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 102
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["vmid"] == vmid and vm["type"] == "lxc"
]
with set_module_args(get_module_args(type="lxc", vmid=vmid)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_specific_qemu_vm_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 100
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["vmid"] == vmid and vm["type"] == "qemu"
]
with set_module_args(get_module_args(type="qemu", vmid=vmid)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_specific_vm_information(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 100
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["vmid"] == vmid]
with set_module_args(get_module_args(type="all", vmid=vmid)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_specific_vm_information_by_using_name(self):
name = "test1-lxc.home.arpa"
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "103"}
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
with set_module_args(get_module_args(type="all", name=name)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_multiple_vms_with_the_same_name(self):
name = "test-lxc.home.arpa"
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "102"},
{"name": name, "vmid": "104"},
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
with set_module_args(get_module_args(type="all", name=name)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 2
def test_get_vm_with_an_empty_name(self):
name = ""
self.connect_mock.return_value.cluster.resources.get.return_value = [
{"name": name, "vmid": "105"},
]
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["name"] == name]
with set_module_args(get_module_args(type="all", name=name)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_lxc_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["node"] == NODE1 and vm["type"] == "lxc"
]
with set_module_args(get_module_args(type="lxc", node=NODE1)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_qemu_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [
vm
for vm in EXPECTED_VMS_OUTPUT
if vm["node"] == NODE1 and vm["type"] == "qemu"
]
with set_module_args(get_module_args(type="qemu", node=NODE1)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 1
def test_get_all_vms_from_specific_node(self):
with pytest.raises(AnsibleExitJson) as exc_info:
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["node"] == NODE1]
with set_module_args(get_module_args(node=NODE1)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
assert len(result["proxmox_vms"]) == 2
def test_module_returns_empty_list_when_vm_does_not_exist(self):
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 200
with set_module_args(get_module_args(type="all", vmid=vmid)):
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == []
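
    # Failure handling: each of the following tests makes one of the mocked API calls
    # raise IOError and asserts that the module fails with a matching error message.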
def test_module_fail_when_qemu_request_fails(self):
self.connect_mock.return_value.nodes.return_value.qemu.return_value.get.side_effect = IOError(
"Some mocked connection error."
)
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args(get_module_args(type="qemu")):
self.module.main()
result = exc_info.value.args[0]
assert "Failed to retrieve QEMU VMs information:" in result["msg"]
def test_module_fail_when_lxc_request_fails(self):
self.connect_mock.return_value.nodes.return_value.lxc.return_value.get.side_effect = IOError(
"Some mocked connection error."
)
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args(get_module_args(type="lxc")):
self.module.main()
result = exc_info.value.args[0]
assert "Failed to retrieve LXC VMs information:" in result["msg"]
def test_module_fail_when_cluster_resources_request_fails(self):
self.connect_mock.return_value.cluster.return_value.resources.return_value.get.side_effect = IOError(
"Some mocked connection error."
)
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args(get_module_args()):
self.module.main()
result = exc_info.value.args[0]
assert (
"Failed to retrieve VMs information from cluster resources:"
in result["msg"]
)
def test_module_fail_when_node_does_not_exist(self):
with pytest.raises(AnsibleFailJson) as exc_info:
with set_module_args(get_module_args(type="all", node="NODE3")):
self.module.main()
result = exc_info.value.args[0]
assert result["msg"] == "Node NODE3 doesn't exist in PVE cluster"
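
    # When an explicit vmid is supplied, the module should not resolve the name via
    # get_vmid(); the patch below verifies that the helper is never called.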
def test_call_to_get_vmid_is_not_used_when_vmid_provided(self):
with patch(
"ansible_collections.community.general.plugins.module_utils.proxmox.ProxmoxAnsible.get_vmid"
) as get_vmid_mock:
with pytest.raises(AnsibleExitJson):
vmid = 100
with set_module_args(
get_module_args(type="all", vmid=vmid, name="something")
):
self.module.main()
assert get_vmid_mock.call_count == 0
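
    # With config="current", the module is expected to attach the configuration
    # returned by the mocked qemu(...).config.get() call to the matching VM entry.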
def test_config_returned_when_specified_qemu_vm_with_config(self):
config_vm_value = {
'scsi0': 'local-lvm:vm-101-disk-0,iothread=1,size=32G',
'net0': 'virtio=4E:79:9F:A8:EE:E4,bridge=vmbr0,firewall=1',
'scsihw': 'virtio-scsi-single',
'cores': 1,
'name': 'test1',
'ostype': 'l26',
'boot': 'order=scsi0;ide2;net0',
'memory': 2048,
'sockets': 1,
}
(self.connect_mock.return_value.nodes.return_value.qemu.return_value.
config.return_value.get.return_value) = config_vm_value
with pytest.raises(AnsibleExitJson) as exc_info:
vmid = 101
with set_module_args(get_module_args(
type="qemu",
vmid=vmid,
config="current",
)):
expected_output = [vm for vm in EXPECTED_VMS_OUTPUT if vm["vmid"] == vmid]
expected_output[0]["config"] = config_vm_value
self.module.main()
result = exc_info.value.args[0]
assert result["proxmox_vms"] == expected_output
@@ -47,13 +47,6 @@ elastic-apm ; python_version >= '3.6'
 # requirements for scaleway modules
 passlib[argon2]
-# requirements for the proxmox modules
-proxmoxer < 2.0.0 ; python_version >= '2.7' and python_version <= '3.6'
-proxmoxer ; python_version > '3.6'
-
-# requirements for the proxmox_pct_remote connection plugin
-paramiko >= 3.0.0 ; python_version >= '3.6'
-
 #requirements for nomad_token modules
 python-nomad < 2.0.0 ; python_version <= '3.6'
 python-nomad >= 2.0.0 ; python_version >= '3.7'
@@ -62,4 +55,7 @@ python-nomad >= 2.0.0 ; python_version >= '3.7'
 python-jenkins >= 0.4.12
 # requirement for json_patch, json_patch_recipe and json_patch plugins
 jsonpatch
+
+# requirements for the wsl connection plugin
+paramiko >= 3.0.0 ; python_version >= '3.6'