Shortens Scenario Guides, incorporates vmware guide (#54554)

* Shortens Scenario Guides, incorporates vmware guide

* adds links from network scenario guides to other network docs
This commit is contained in:
Alicia Cozine 2019-04-04 11:58:30 -05:00 committed by GitHub
commit a4d0bc2c43
30 changed files with 227 additions and 178 deletions

View file

@ -0,0 +1,21 @@
.. _cloud_guides:
*******************
Public Cloud Guides
*******************
The guides in this section cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
.. toctree::
:maxdepth: 1
guide_alicloud
guide_aws
guide_cloudstack
guide_gce
guide_azure
guide_online
guide_packet
guide_rax
guide_scaleway
guide_vultr

View file

@ -1,5 +1,5 @@
Getting Started with Docker
===========================
Docker Guide
============
Ansible offers the following modules for orchestrating Docker containers:
@ -327,7 +327,3 @@ For the default host and each host in the hosts list define the following attrib
description: The port containers use for SSH
required: false
default: 22

View file

@ -1,5 +1,5 @@
Getting Started with Kubernetes and OpenShift
=============================================
Kubernetes and OpenShift Guide
==============================
Modules for interacting with the Kubernetes (K8s) and OpenShift API are under development, and can be used in preview mode. To use them, review the requirements, and then follow the installation and use instructions.
@ -53,4 +53,3 @@ Filing issues
If you find a bug or have a suggestion regarding individual modules or the role, please file issues at `OpenShift Rest Client issues <https://github.com/openshift/openshift-restclient-python/issues>`_.
There is also a utility module, k8s_common.py, that is part of the `Ansible <https://github.com/ansible/ansible>`_ repo. If you find a bug or have suggestions regarding it, please file issues at `Ansible issues <https://github.com/ansible/ansible/issues>`_.

View file

@ -1,6 +1,6 @@
*************************
Using Ansible with Online
*************************
****************
Online.net Guide
****************
Introduction
============

View file

@ -1,302 +0,0 @@
Continuous Delivery and Rolling Upgrades
========================================
.. _lamp_introduction:
Introduction
````````````
Continuous Delivery is the concept of frequently delivering updates to your software application.
The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization
gets better at the process of responding to change.
Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis -- sometimes every time
there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates in a zero-downtime way.
This document describes in detail how to achieve this goal, using one of Ansible's most complete example
playbooks as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates,
and group variables, and it also comes with an orchestration playbook that can do zero-downtime
rolling upgrades of the web application stack.
.. note::
`Click here for the latest playbooks for this example
<https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy>`_.
The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.
We're not going to cover how to run these playbooks here. Read the included README in the GitHub project along with the
example for that information. Instead, we're going to take a close look at every part of the playbook and describe what it does.
.. _lamp_deployment:
Site Deployment
```````````````
Let's start with ``site.yml``. This is our site-wide deployment playbook. It can be used to initially deploy the site, as well
as push updates to all of the servers::
---
# This playbook deploys the whole application stack in this site.
# Apply common configuration to all hosts
- hosts: all
roles:
- common
# Configure and deploy database servers.
- hosts: dbservers
roles:
- db
# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.
- hosts: webservers
roles:
- base-apache
- web
# Configure and deploy the load balancer(s).
- hosts: lbservers
roles:
- haproxy
# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring
roles:
- base-apache
- nagios
.. note::
If you're not familiar with terms like playbooks and plays, you should review :ref:`working_with_playbooks`.
In this playbook we have 5 plays. The first one targets ``all`` hosts and applies the ``common`` role to all of the hosts.
This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers.
The next four plays run against specific host groups and apply specific roles to those servers.
Along with the roles for Nagios monitoring, the database, and the web application, we've implemented a
``base-apache`` role that installs and configures a basic Apache setup. This is used by both the
sample web application and the Nagios hosts.
.. _lamp_roles:
Reusable Content: Roles
```````````````````````
By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize
content (tasks, handlers, templates, and files) into reusable components.
This example has six roles: ``common``, ``base-apache``, ``db``, ``haproxy``, ``nagios``, and ``web``. How you organize
your roles is up to you and your application, but most sites will have one or more common roles that are applied to
all systems, and then a series of application-specific roles that install and configure particular parts of the site.
Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior.
You can read more about roles in the :ref:`playbooks_reuse_roles` section.
.. _lamp_group_variables:
Configuration: Group Variables
``````````````````````````````
Group variables are variables that are applied to groups of servers. They can be used in templates and in
playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in
a directory called ``group_vars`` in the same location as your inventory.
Here is lamp_haproxy's ``group_vars/all`` file. As you might expect, these variables are applied to all of the machines in your inventory::
---
httpd_port: 80
ntpserver: 192.0.2.23
This is a YAML file, and you can create lists and dictionaries for more complex variable structures.
In this case, we are just setting two variables, one for the port for the web server, and one for the
NTP server that our machines should use for time synchronization.
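For example, a hypothetical group variables file could mix scalar values with lists and dictionaries; the variable names below are purely illustrative and not part of the lamp_haproxy example::

    ---
    httpd_port: 80
    ntpserver: 192.0.2.23

    # a list of packages to install everywhere
    common_packages:
      - ntp
      - libselinux-python

    # a dictionary keyed by environment
    apache_settings:
      production:
        max_clients: 500
      staging:
        max_clients: 50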
Here's another group variables file. This is ``group_vars/dbservers`` which applies to the hosts in the ``dbservers`` group::
---
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: usersecret
If you look in the example, you will see similar group variables for the ``webservers`` group and the ``lbservers`` group.
These variables are used in a variety of places. You can use them in playbooks, like this, in ``roles/db/tasks/main.yml``::
- name: Create Application Database
mysql_db:
name: "{{ dbname }}"
state: present
- name: Create Application DB User
mysql_user:
name: "{{ dbuser }}"
password: "{{ upassword }}"
priv: "*.*:ALL"
host: '%'
state: present
You can also use these variables in templates, like this, in ``roles/common/templates/ntp.conf.j2``::
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The syntax
inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the
data inside. In templates, you can also use for loops and if statements to handle more complex situations,
like this, in ``roles/common/templates/iptables.j2``:
.. code-block:: jinja
{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
This is testing to see if the inventory name of the machine we're currently operating on (``inventory_hostname``)
exists in the inventory group ``dbservers``. If so, that machine will get an iptables ACCEPT line for port 3306.
Here's another example, from the same template:
.. code-block:: jinja
{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}
This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for
each monitoring host's default IPv4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts.
You can learn a lot more about Jinja2 and its capabilities `here <http://jinja.pocoo.org/docs/>`_, and you
can read more about Ansible variables in general in the :ref:`playbooks_variables` section.
.. _lamp_rolling_upgrade:
The Rolling Upgrade
```````````````````
Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's
orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible
refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it.
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_update.yml``.
Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this::
- hosts: monitoring
tasks: []
What's going on here, and why are there no tasks? You might know that Ansible gathers "facts" from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it's a useful trick to know.
The next part is the update play. The first part looks like this::
- hosts: webservers
user: root
serial: 1
This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.
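The ``serial`` keyword also accepts a percentage of the group, or a list of batch sizes that grows as the update proceeds. The values below are purely illustrative and not part of the lamp_haproxy example::

    # update 10% of the webservers at a time
    - hosts: webservers
      serial: "10%"

    # or start cautiously and widen the batches: 1 host, then 5 hosts,
    # then 20% of the group for each remaining batch
    - hosts: webservers
      serial:
        - 1
        - 5
        - "20%"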
Here is the next part of the update play::
pre_tasks:
- name: disable nagios alerts for this host webserver service
nagios:
action: disable_alerts
host: "{{ inventory_hostname }}"
services: webserver
delegate_to: "{{ item }}"
loop: "{{ groups.monitoring }}"
- name: disable the server in haproxy
shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
delegate_to: "{{ item }}"
loop: "{{ groups.lbservers }}"
.. note::
- The ``serial`` keyword forces the play to be executed in 'batches'. Each batch counts as a full play with a subselection of hosts.
This has some consequences on play behavior. For example, if all hosts in a batch fail, the play fails, which in turn fails the entire run. You should consider this when combining with ``max_fail_percentage``.
The ``pre_tasks`` keyword just lets you list tasks to run before the roles are called. This will make more sense in a minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the webserver that we are currently updating from the HAProxy load balancing pool.
The ``delegate_to`` and ``loop`` arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, "on behalf" of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers.
Note that the HAProxy step looks a little complicated. We're using HAProxy in this example because it's freely available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the 'pre tasks' section -- take the server out of monitoring, and take it out of rotation.
The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in ``web`` and ``base-apache`` roles to be applied to the web servers, including an update of the web application code itself. We don't have to do it this way--we could instead just purely update the web application, but this is a good example of how roles can be used to reuse tasks::
roles:
- common
- base-apache
- web
Finally, in the ``post_tasks`` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool::
post_tasks:
- name: Enable the server in haproxy
shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
delegate_to: "{{ item }}"
loop: "{{ groups.lbservers }}"
- name: re-enable nagios alerts
nagios:
action: enable_alerts
host: "{{ inventory_hostname }}"
services: webserver
delegate_to: "{{ item }}"
loop: "{{ groups.monitoring }}"
Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead.
.. _lamp_end_notes:
Managing Other Load Balancers
`````````````````````````````
In this example, we use the simple HAProxy load balancer to front-end the web servers. It's easy to configure and easy to manage. As we have mentioned, Ansible has built-in support for a variety of other load balancers like Citrix NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the :ref:`working_with_modules` documentation for more information.
For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run them as a ``local_action`` if they contact an API. You can read more about local actions in the :ref:`playbooks_delegation` section. Should you develop anything interesting for some hardware where there is not a core module, it might make for a good module for core inclusion!
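For example, if your load balancer exposes a REST API, a task like the following could pull a host out of rotation. The URL, credentials, and endpoint here are hypothetical and would need to match your device's actual API::

    - name: Remove this host from the load balancer pool via its REST API
      uri:
        url: "https://lb.example.com/api/pools/myapplb/members/{{ inventory_hostname }}"
        method: DELETE
        user: "{{ lb_user }}"
        password: "{{ lb_password }}"
        force_basic_auth: yes
        validate_certs: no
      delegate_to: localhost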
.. _lamp_end_to_end:
Continuous Delivery End-To-End
``````````````````````````````
Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like `Jenkins <https://jenkins.io/>`_ or `Atlassian Bamboo <https://www.atlassian.com/software/bamboo>`_ to tie the development, test, release, and deploy steps together. You may also want to use a tool like `Gerrit <https://www.gerritcodereview.com/>`_ to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both.
Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.
For integration with Continuous Integration systems, you can easily trigger playbook runs using the ``ansible-playbook`` command line tool, or, if you're using :ref:`ansible_tower`, the ``tower-cli`` or the built-in REST API. (The tower-cli command 'joblaunch' will spawn a remote job over the REST API and is pretty slick).
This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations.
.. seealso::
`lamp_haproxy example <https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy>`_
The lamp_haproxy example discussed here.
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
An introduction to playbook roles
:ref:`playbooks_variables`
An introduction to Ansible variables
`Ansible.com: Continuous Delivery <https://www.ansible.com/use-cases/continuous-delivery>`_
An introduction to Continuous Delivery with Ansible

View file

@ -1,8 +1,8 @@
.. _guide_scaleway:
***************************
Using Scaleway with Ansible
***************************
**************
Scaleway Guide
**************
.. _scaleway_introduction:

View file

@ -1,5 +1,5 @@
Using Vagrant and Ansible
=========================
Vagrant Guide
=============
.. _vagrant_intro:
@ -151,4 +151,3 @@ The "Tips and Tricks" chapter of the `Ansible Provisioner documentation
The open issues for the Ansible provisioner in the Vagrant project
:ref:`working_with_playbooks`
An introduction to playbooks

View file

@ -0,0 +1,30 @@
.. _vmware_ansible:
******************
VMware Guide
******************
Welcome to the Ansible for VMware Guide!
The purpose of this guide is to teach you everything you need to know about using Ansible with VMware.
To get started, please select one of the following topics.
.. toctree::
:maxdepth: 1
vmware_scenarios/vmware_intro
vmware_scenarios/vmware_concepts
vmware_scenarios/vmware_requirements
vmware_scenarios/vmware_inventory
vmware_scenarios/vmware_scenarios
vmware_scenarios/vmware_troubleshooting
vmware_scenarios/vmware_external_doc_links
vmware_scenarios/faq
.. comments look like this - start with two dots
.. getting_started content not ready
.. vmware_scenarios/vmware_getting_started
.. module index page not ready
.. vmware_scenarios/vmware_module_reference
.. always exclude the template file
.. vmware_scenarios/vmware_scenario_1

View file

@ -1,15 +1,42 @@
:orphan:
***************
Scenario Guides
***************
.. unified index page included for backwards compatibility
The guides in this section explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
******************
Scenario Guides
******************
The guides in this section cover integrating Ansible with a variety of
platforms, products, and technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
.. toctree::
:glob:
:maxdepth: 1
:caption: Public Cloud Guides
guide_*
guide_alicloud
guide_aws
guide_cloudstack
guide_gce
guide_azure
guide_online
guide_packet
guide_rax
guide_scaleway
guide_vultr
Pending topics may include: Jenkins, Linode/DigitalOcean, Continuous Deployment, and more.
.. toctree::
:maxdepth: 1
:caption: Network Technology Guides
guide_aci
guide_meraki
guide_infoblox
.. toctree::
:maxdepth: 1
:caption: Virtualization & Containerization Guides
guide_docker
guide_kubernetes
guide_vagrant
guide_vmware

View file

@ -0,0 +1,16 @@
.. _network_guides:
*************************
Network Technology Guides
*************************
The guides in this section cover using Ansible with specific network technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
.. toctree::
:maxdepth: 1
guide_aci
guide_meraki
guide_infoblox
To learn more about Network Automation with Ansible, see :ref:`network_getting_started` and :ref:`network_advanced`.

View file

@ -0,0 +1,15 @@
.. _virtualization_guides:
******************************************
Virtualization and Containerization Guides
******************************************
The guides in this section cover integrating Ansible with popular tools for creating virtual machines and containers. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
.. toctree::
:maxdepth: 1
guide_docker
guide_kubernetes
guide_vagrant
guide_vmware

View file

@ -0,0 +1,26 @@
.. _vmware_faq:
******************
Ansible VMware FAQ
******************
vmware_guest
============
Can I deploy a virtual machine on a standalone ESXi server?
------------------------------------------------------------
Yes. ``vmware_guest`` can deploy a virtual machine with required settings on a standalone ESXi server.
Is ``/vm`` required for the ``vmware_guest`` module?
--------------------------------------------------------
Prior to Ansible version 2.5, ``folder`` was an optional parameter with a default value of ``/vm``.
The folder parameter was used to discover information about virtual machines in the given infrastructure.
Starting with Ansible version 2.5, ``folder`` is still an optional parameter with no default value.
This parameter is now used to identify a user's virtual machine if multiple virtual machines or virtual
machine templates are found with the same name. VMware does not restrict the system administrator from creating virtual
machines with the same name.
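For example, if two virtual machines named ``testvm_1`` exist in different folders, you can pass ``folder`` to select the right one. The names, paths, and variables below are hypothetical:

.. code-block:: yaml

    - name: Power on the testvm_1 that lives under /DC1/vm/dev
      vmware_guest:
        hostname: "{{ vcenter_server }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: no
        datacenter: DC1
        folder: /DC1/vm/dev
        name: testvm_1
        state: poweredon
      delegate_to: localhost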

View file

@ -0,0 +1,222 @@
.. _vmware_guest_from_template:
****************************************
Deploy a virtual machine from a template
****************************************
.. contents:: Topics
Introduction
============
This guide will show you how to utilize Ansible to clone a virtual machine from an already existing VMware template or an existing VMware guest.
Scenario Requirements
=====================
* Software
* Ansible 2.5 or later must be installed
* The Python module ``Pyvmomi`` must be installed on the Ansible control node (or the target host if not executing against localhost)
* Installing the latest ``Pyvmomi`` via ``pip`` is recommended (the OS-provided packages are usually out of date and incompatible)
* Hardware
* vCenter Server with at least one ESXi server
* Access / Credentials
* Ansible (or the target server) must have network access to either the vCenter server or the ESXi server you will be deploying to
* Username and Password
* Administrator user with the following privileges
- ``Datastore.AllocateSpace`` on the destination datastore or datastore folder
- ``Network.Assign`` on the network to which the virtual machine will be assigned
- ``Resource.AssignVMToPool`` on the destination host, cluster, or resource pool
- ``VirtualMachine.Config.AddNewDisk`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.AddRemoveDevice`` on the datacenter or virtual machine folder
- ``VirtualMachine.Interact.PowerOn`` on the datacenter or virtual machine folder
- ``VirtualMachine.Inventory.CreateFromExisting`` on the datacenter or virtual machine folder
- ``VirtualMachine.Provisioning.Clone`` on the virtual machine you are cloning
- ``VirtualMachine.Provisioning.Customize`` on the virtual machine or virtual machine folder if you are customizing the guest operating system
- ``VirtualMachine.Provisioning.DeployTemplate`` on the template you are using
- ``VirtualMachine.Provisioning.ReadCustSpecs`` on the root vCenter Server if you are customizing the guest operating system
Depending on your requirements, you may also need one or more of the following privileges:
- ``VirtualMachine.Config.CPUCount`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.Memory`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.DiskExtend`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.Annotation`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.AdvancedConfig`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.EditDevice`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.Resource`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.Settings`` on the datacenter or virtual machine folder
- ``VirtualMachine.Config.UpgradeVirtualHardware`` on the datacenter or virtual machine folder
- ``VirtualMachine.Interact.SetCDMedia`` on the datacenter or virtual machine folder
- ``VirtualMachine.Interact.SetFloppyMedia`` on the datacenter or virtual machine folder
- ``VirtualMachine.Interact.DeviceConnection`` on the datacenter or virtual machine folder
Assumptions
===========
- All variable names and VMware object names are case sensitive
- VMware allows the creation of virtual machines and templates with the same name across datacenters and within datacenters
- You need to use Python version 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours
Caveats
=======
- Hosts in the ESXi cluster must have access to the datastore that the template resides on.
- Multiple templates with the same name will cause module failures.
- In order to utilize Guest Customization, VMware Tools must be installed on the template. For Linux, the ``open-vm-tools`` package is recommended, and it requires that ``Perl`` be installed.
Example Description
===================
In this use case / example, we will be selecting a virtual machine template and cloning it into a specific folder in our Datacenter / Cluster. The following Ansible playbook showcases the basic parameters that are needed for this.
.. code-block:: yaml
---
- name: Create a VM from a template
hosts: localhost
gather_facts: no
tasks:
- name: Clone the template
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
name: testvm_2
template: template_el7
datacenter: "{{ datacenter_name }}"
folder: /DC1/vm
state: poweredon
cluster: "{{ cluster_name }}"
wait_for_ip_address: yes
Since Ansible utilizes the VMware API to perform actions, in this use case we will be connecting directly to the API from our localhost. This means that our playbooks will not be running from the vCenter or ESXi Server. We do not necessarily need to collect facts about our localhost, so the ``gather_facts`` parameter will be disabled. You can run these modules against another server that would then connect to the API if your localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server.
To begin, there are a few bits of information we will need. First and foremost is the hostname of the ESXi server or vCenter server. After this, you will need the username and password for this server. For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_. If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
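One common pattern, sketched here with example values, is to keep these credentials in a separate variables file, encrypt that file with ``ansible-vault encrypt``, and load it in the play with ``vars_files``:

.. code-block:: yaml

    # vcenter_vars.yml -- encrypt with: ansible-vault encrypt vcenter_vars.yml
    vcenter_ip: vcenter.example.com
    vcenter_username: administrator@vsphere.local
    vcenter_password: mysecretpassword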
Now you need to supply the information about the virtual machine which will be created. Give your virtual machine a name, one that conforms to all VMware requirements for naming conventions. Next, select the display name of the template from which you want to clone the new virtual machine. This must match exactly what is displayed in the VMware Web UI. Then you can specify a folder to place this new virtual machine in. This path can either be a relative path or a full path to the folder including the Datacenter. You may need to specify a state for the virtual machine. This simply tells the module which action you want to take; in this case you will ensure that the virtual machine exists and is powered on. An optional parameter is ``wait_for_ip_address``, which tells Ansible to wait until the virtual machine has fully booted and VMware Tools is running before completing this task.
What to expect
--------------
- You will see a bit of JSON output after this playbook completes. This output shows various parameters that are returned from the module and from vCenter about the newly created VM.
.. code-block:: yaml
{
"changed": true,
"instance": {
"annotation": "",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsNotRunning",
"guest_tools_version": "0",
"hw_cores_per_socket": 1,
"hw_datastores": [
"ds_215"
],
"hw_esxi_host": "192.0.2.44",
"hw_eth0": {
"addresstype": "assigned",
"ipaddresses": null,
"label": "Network adapter 1",
"macaddress": "00:50:56:8c:19:f4",
"macaddress_dash": "00-50-56-8c-19-f4",
"portgroup_key": "dvportgroup-17",
"portgroup_portkey": "0",
"summary": "DVSwitch: 50 0c 5b 22 b6 68 ab 89-fc 0b 59 a4 08 6e 80 fa"
},
"hw_files": [
"[ds_215] testvm_2/testvm_2.vmx",
"[ds_215] testvm_2/testvm_2.vmsd",
"[ds_215] testvm_2/testvm_2.vmdk"
],
"hw_folder": "/DC1/vm",
"hw_guest_full_name": null,
"hw_guest_ha_state": null,
"hw_guest_id": null,
"hw_interfaces": [
"eth0"
],
"hw_is_template": false,
"hw_memtotal_mb": 512,
"hw_name": "testvm_2",
"hw_power_status": "poweredOff",
"hw_processor_count": 2,
"hw_product_uuid": "420cb25b-81e8-8d3b-dd2d-a439ee54fcc5",
"hw_version": "vmx-13",
"instance_uuid": "500cd53b-ed57-d74e-2da8-0dc0eddf54d5",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"snapshots": []
},
"invocation": {
"module_args": {
"annotation": null,
"cdrom": {},
"cluster": "DC1_C1",
"customization": {},
"customization_spec": null,
"customvalues": [],
"datacenter": "DC1",
"disk": [],
"esxi_hostname": null,
"folder": "/DC1/vm",
"force": false,
"guest_id": null,
"hardware": {},
"hostname": "192.0.2.44",
"is_template": false,
"linked_clone": false,
"name": "testvm_2",
"name_match": "first",
"networks": [],
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"resource_pool": null,
"snapshot_src": null,
"state": "present",
"state_change_timeout": 0,
"template": "template_el7",
"username": "administrator@vsphere.local",
"uuid": null,
"validate_certs": false,
"vapp_properties": [],
"wait_for_ip_address": true
}
}
}
- ``changed`` is set to ``True``, which indicates that the virtual machine was built using the given template. The module will not complete until the clone task in VMware is finished. This can take some time depending on your environment.
- If you use the ``wait_for_ip_address`` parameter, the clone time will also increase, since the module waits until the virtual machine boots into the OS and an IP address has been assigned to the given NIC.
Troubleshooting
---------------
Things to inspect
- Check if the values provided for username and password are correct
- Check if the datacenter you provided is available
- Check if the template specified exists and you have permissions to access the datastore
- Ensure the full folder path you specified already exists. It will not create folders automatically for you

View file

@ -0,0 +1,120 @@
.. _vmware_guest_find_folder:
******************************************************
Find folder path of an existing VMware virtual machine
******************************************************
.. contents:: Topics
Introduction
============
This guide will show you how to utilize Ansible to find the folder path of an existing VMware virtual machine.
Scenario Requirements
=====================
* Software
* Ansible 2.5 or later must be installed.
* The Python module ``Pyvmomi`` must be installed on the Ansible control node (or the target host if not executing against localhost).
* We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
* Hardware
* At least one standalone ESXi server or
* vCenter Server with at least one ESXi server
* Access / Credentials
* Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
* Username and Password for vCenter or ESXi server
Caveats
=======
- All variable names and VMware object names are case sensitive.
- You need to use Python version 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours.
Example Description
===================
With the following Ansible playbook you can find the folder path of an existing virtual machine using its name.
.. code-block:: yaml
---
- name: Find folder path of an existing virtual machine
hosts: localhost
gather_facts: False
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
tasks:
- set_fact:
vm_name: "DC0_H0_VM0"
- name: "Find folder for VM - {{ vm_name }}"
vmware_guest_find:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
name: "{{ vm_name }}"
delegate_to: localhost
register: vm_facts
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
- Hostname of the ESXi server or vCenter server
- Username and password for the ESXi or vCenter server
- Name of the existing Virtual Machine for which you want to collect the folder path
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
The name of the existing virtual machine will be used as input for the ``vmware_guest_find`` module via the ``name`` parameter.
What to expect
--------------
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
.. code-block:: yaml
"vm_facts": {
"changed": false,
"failed": false,
...
"folders": [
"/F0/DC0/vm/F0"
]
}
Troubleshooting
---------------
If your playbook fails:
- Check if the values provided for username and password are correct.
- Check if the datacenter you provided is available.
- Check if the specified virtual machine exists and you have the appropriate permissions to access the VMware object.
- Ensure the full folder path you specified already exists.

View file

@ -0,0 +1,126 @@
.. _vmware_guest_remove_virtual_machine:
*****************************************
Remove an existing VMware virtual machine
*****************************************
.. contents:: Topics
Introduction
============
This guide will show you how to utilize Ansible to remove an existing VMware virtual machine.
Scenario Requirements
=====================
* Software
* Ansible 2.5 or later must be installed.
* The Python module ``Pyvmomi`` must be installed on the Ansible control node (or the target host if not executing against localhost).
* We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
* Hardware
* At least one standalone ESXi server or
* vCenter Server with at least one ESXi server
* Access / Credentials
* Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
* Username and Password for vCenter or ESXi server
* Hosts in the ESXi cluster must have access to the datastore that the template resides on.
Caveats
=======
- All variable names and VMware object names are case sensitive.
- You need to use Python version 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours.
- The ``vmware_guest`` module tries to mimic the VMware Web UI workflow, so the virtual machine must be in a powered-off state in order to remove it from the VMware inventory.
.. warning::
Removing a VMware virtual machine with the ``vmware_guest`` module is a destructive operation that cannot be reverted, so it is strongly recommended to take a backup of the virtual machine and related files (vmx and vmdk files) before proceeding.
Example Description
===================
In this use case / example, you will be removing a virtual machine using its name. The following Ansible playbook showcases the basic parameters that are needed for this.
.. code-block:: yaml
---
- name: Remove virtual machine
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
hosts: localhost
tasks:
- set_fact:
vm_name: "VM_0003"
datacenter: "DC1"
- name: Remove "{{ vm_name }}"
vmware_guest:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: no
cluster: "DC1_C1"
name: "{{ vm_name }}"
state: absent
delegate_to: localhost
register: facts
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
- Hostname of the ESXi server or vCenter server
- Username and password for the ESXi or vCenter server
- Name of the existing Virtual Machine you want to remove
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
The name of the existing virtual machine will be used as input for the ``vmware_guest`` module via the ``name`` parameter.
What to expect
--------------
- Compared to other operations performed using the ``vmware_guest`` module, you will see very little JSON output after this playbook completes:
.. code-block:: yaml
{
"changed": true
}
- ``changed`` is set to ``True``, which indicates that the virtual machine has been removed from the VMware inventory. This can take some time depending upon your environment and network connectivity.
Troubleshooting
---------------
If your playbook fails:
- Check if the values provided for username and password are correct.
- Check if the datacenter you provided is available.
- Check if the virtual machine specified exists and you have permissions to access the datastore.
- Ensure the full folder path you specified already exists. It will not create folders automatically for you.

View file

@ -0,0 +1,173 @@
.. _vmware_guest_rename_virtual_machine:
**********************************
Rename an existing virtual machine
**********************************
.. contents:: Topics
Introduction
============
This guide will show you how to utilize Ansible to rename an existing virtual machine.
Scenario Requirements
=====================
* Software
* Ansible 2.5 or later must be installed.
* The Python module ``Pyvmomi`` must be installed on the Ansible control node (or the target host if not executing against localhost).
* We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
* Hardware
* At least one standalone ESXi server or
* vCenter Server with at least one ESXi server
* Access / Credentials
* Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
* Username and Password for vCenter or ESXi server
* Hosts in the ESXi cluster must have access to the datastore that the template resides on.
Caveats
=======
- All variable names and VMware object names are case sensitive.
- You need to use Python version 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours.
Example Description
===================
With the following Ansible playbook you can rename an existing virtual machine, identifying it by its UUID.
.. code-block:: yaml
---
- name: Rename virtual machine from old name to new name using UUID
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
hosts: localhost
tasks:
- set_fact:
vm_name: "old_vm_name"
new_vm_name: "new_vm_name"
datacenter: "DC1"
cluster_name: "DC1_C1"
- name: Get VM "{{ vm_name }}" uuid
vmware_guest_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
datacenter: "{{ datacenter }}"
folder: "/{{datacenter}}/vm"
name: "{{ vm_name }}"
register: vm_facts
- name: Rename "{{ vm_name }}" to "{{ new_vm_name }}"
vmware_guest:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
cluster: "{{ cluster_name }}"
uuid: "{{ vm_facts.instance.hw_product_uuid }}"
name: "{{ new_vm_name }}"
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: ``pip install Pyvmomi`` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
- Hostname of the ESXi server or vCenter server
- Username and password for the ESXi or vCenter server
- The UUID of the existing Virtual Machine you want to rename
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
Now you need to supply the information about the existing virtual machine which will be renamed. To rename a virtual machine, the ``vmware_guest`` module uses the VMware UUID, which is unique across the vCenter environment. This value is autogenerated and cannot be changed. You will use the ``vmware_guest_facts`` module to find the virtual machine and get its VMware UUID.
This value will be used as input for the ``vmware_guest`` module. Specify the new name for the virtual machine, one that conforms to all VMware requirements for naming conventions, as the ``name`` parameter. Also, provide the VMware UUID as the value of the ``uuid`` parameter.
What to expect
--------------
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
.. code-block:: yaml
{
"changed": true,
"instance": {
"annotation": "",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsNotRunning",
"guest_tools_version": "10247",
"hw_cores_per_socket": 1,
"hw_datastores": ["ds_204_2"],
"hw_esxi_host": "10.x.x.x",
"hw_eth0": {
"addresstype": "assigned",
"ipaddresses": [],
"label": "Network adapter 1",
"macaddress": "00:50:56:8c:b8:42",
"macaddress_dash": "00-50-56-8c-b8-42",
"portgroup_key": "dvportgroup-31",
"portgroup_portkey": "15",
"summary": "DVSwitch: 50 0c 3a 69 df 78 2c 7b-6e 08 0a 89 e3 a6 31 17"
},
"hw_files": ["[ds_204_2] old_vm_name/old_vm_name.vmx", "[ds_204_2] old_vm_name/old_vm_name.nvram", "[ds_204_2] old_vm_name/old_vm_name.vmsd", "[ds_204_2] old_vm_name/vmware.log", "[ds_204_2] old_vm_name/old_vm_name.vmdk"],
"hw_folder": "/DC1/vm",
"hw_guest_full_name": null,
"hw_guest_ha_state": null,
"hw_guest_id": null,
"hw_interfaces": ["eth0"],
"hw_is_template": false,
"hw_memtotal_mb": 1024,
"hw_name": "new_vm_name",
"hw_power_status": "poweredOff",
"hw_processor_count": 1,
"hw_product_uuid": "420cbebb-835b-980b-7050-8aea9b7b0a6d",
"hw_version": "vmx-13",
"instance_uuid": "500c60a6-b7b4-8ae5-970f-054905246a6f",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"snapshots": []
}
}
confirming that you've renamed the virtual machine.
Troubleshooting
---------------
If your playbook fails:
- Check if the values provided for username and password are correct.
- Check if the datacenter you provided is available.
- Check if the virtual machine specified exists and you have permissions to access the datastore.
- Ensure the full folder path you specified already exists.

View file

@ -0,0 +1,161 @@
.. _vmware_http_api_usage:
****************************************
Using the VMware HTTP API with Ansible
****************************************
.. contents:: Topics
Introduction
============
This guide will show you how to utilize Ansible to automate various tasks using the VMware HTTP APIs.
Scenario Requirements
=====================
* Software
* Ansible 2.5 or later must be installed.
* We recommend installing the latest version with pip: ``pip install Pyvmomi`` on the Ansible control node
(as the OS packages are usually out of date and incompatible) if you are planning to use any existing VMware modules.
* Hardware
* vCenter Server 6.5 and above with at least one ESXi server
* Access / Credentials
* Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
* Username and Password for vCenter
Caveats
=======
- All variable names and VMware object names are case sensitive.
- You need to use Python version 2.7.9 or later in order to use the ``validate_certs`` option, as this version is capable of changing the SSL verification behaviours.
- VMware HTTP APIs were introduced in vSphere 6.5, so the minimum required vSphere version is 6.5.
- There are a very limited number of APIs exposed, so you may need to rely on the XMLRPC-based VMware modules.
Example Description
===================
With the following Ansible playbook you can find the VMware ESXi host system(s) and perform various tasks depending on the list of host systems.
This is a generic example to show how Ansible can be utilized to consume VMware HTTP APIs.
.. code-block:: yaml
---
- name: Example showing VMware HTTP API utilization
hosts: localhost
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
tasks:
- name: Login into vCenter and get cookies
uri:
url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
force_basic_auth: yes
validate_certs: no
method: POST
user: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
register: login
- name: Get all hosts from vCenter using cookies from last task
uri:
url: https://{{ vcenter_server }}/rest/vcenter/host
force_basic_auth: yes
validate_certs: no
headers:
Cookie: "{{ login.set_cookie }}"
register: vchosts
- name: Change Log level configuration of the given hostsystem
vmware_host_config_manager:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
esxi_hostname: "{{ item.name }}"
options:
'Config.HostAgent.log.level': 'error'
validate_certs: no
with_items: "{{ vchosts.json.value }}"
register: host_config_results
Since Ansible utilizes the VMware HTTP API using the ``uri`` module to perform actions, in this use case it will be connecting directly to the VMware HTTP API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the ``gather_facts`` parameter, since you don't want to collect facts about localhost.
Before you begin, make sure you have:
- Hostname of the vCenter server
- Username and password for the vCenter server
- Version of vCenter is at least 6.5
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using :ref:`ansible-vault` or using `Ansible Tower credentials <https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html>`_.
If your vCenter server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the ``validate_certs`` parameter. To do this you need to set ``validate_certs=False`` in your playbook.
As you can see, we use the ``uri`` module in the first task to log in to the vCenter server and store the result in the ``login`` variable using ``register``. In the second task, we use the cookies from the first task to gather information about the ESXi host systems.
Using this information, we change the ESXi host systems' advanced configuration.
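When you are finished, you can also delete the session (log out) by sending a ``DELETE`` request to the same endpoint. This is a minimal sketch that reuses the cookie from the ``login`` task; adjust it as needed for your vCenter version:

.. code-block:: yaml

    - name: Log out of vCenter by deleting the session
      uri:
        url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
        validate_certs: no
        method: DELETE
        headers:
          Cookie: "{{ login.set_cookie }}"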
What to expect
--------------
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
.. code-block:: yaml
"results": [
{
...
"invocation": {
"module_args": {
"cluster_name": null,
"esxi_hostname": "10.76.33.226",
"hostname": "10.65.223.114",
"options": {
"Config.HostAgent.log.level": "error"
},
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"username": "administrator@vsphere.local",
"validate_certs": false
}
},
"item": {
"connection_state": "CONNECTED",
"host": "host-21",
"name": "10.76.33.226",
"power_state": "POWERED_ON"
},
"msg": "Config.HostAgent.log.level changed."
...
}
]
Troubleshooting
---------------
If your playbook fails:
- Check if the values provided for username and password are correct.
- Check that you are using vCenter 6.5 or later, since earlier versions do not expose these HTTP APIs.
.. seealso::
`VMware vSphere and Ansible From Zero to Useful by @arielsanchezmor <https://www.youtube.com/watch?v=0_qwOKlBlo8>`_
vBrownBag session video related to VMware HTTP APIs
`Sample Playbooks for using VMware HTTP APIs <https://github.com/Akasurde/ansible-vmware-http>`_
GitHub repo for examples of Ansible playbook to manage VMware using HTTP APIs

View file

@ -0,0 +1,45 @@
.. _vmware_concepts:
***************************
Ansible for VMware Concepts
***************************
Some of these concepts are common to all uses of Ansible, including VMware automation; some are specific to VMware. You need to understand them to use Ansible for VMware automation. This introduction provides the background you need to follow the :ref:`scenarios<vmware_scenarios>` in this guide.
.. contents::
:local:
Control Node
============
Any machine with Ansible installed. You can run commands and playbooks, invoking ``/usr/bin/ansible`` or ``/usr/bin/ansible-playbook``, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Delegation
==========
Delegation allows you to select the system that executes a given task. If you do not have ``pyVmomi`` installed on your control node, use the ``delegate_to`` keyword on VMware-specific tasks to execute them on any host where you have ``pyVmomi`` installed.
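For example, a minimal sketch (the hostnames below are hypothetical) that runs a VMware task on a helper host where ``pyVmomi`` is installed:

.. code-block:: yaml

    - name: Gather facts about a virtual machine from a delegate host
      vmware_guest_facts:
        hostname: vcenter.example.com
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: no
        datacenter: DC1
        name: testvm_1
      delegate_to: helper.example.com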
Modules
=======
The units of code Ansible executes. Each module has a particular use, from creating virtual machines on vCenter to managing distributed virtual switches in the vCenter environment. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the :ref:`list of cloud modules<cloud_modules>`, which includes VMware modules.
Playbooks
=========
Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand.
pyVmomi
=======
Ansible VMware modules are written on top of `pyVmomi <https://github.com/vmware/pyvmomi>`_. ``pyVmomi`` is the official Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi, and vCenter infrastructure.
You need to install this Python SDK on the host from which you want to invoke VMware automation. For example, if you are using the control node, then ``pyVmomi`` must be installed on the control node.
If you are using a ``delegate_to`` host that is different from your control node, then you need to install ``pyVmomi`` on that ``delegate_to`` node.
You can install pyVmomi using pip:
.. code-block:: bash
$ pip install pyvmomi

View file

@ -0,0 +1,12 @@
.. _vmware_external_doc_links:
*****************************
Other useful VMware resources
*****************************
* `PyVmomi Documentation <https://github.com/vmware/pyvmomi/tree/master/docs>`_
* `VMware API and SDK Documentation <https://www.vmware.com/support/pubs/sdk_pubs.html>`_
* `VCSIM test container image <https://quay.io/repository/ansible/vcenter-test-container>`_
* `Ansible VMware community wiki page <https://github.com/ansible/community/wiki/VMware>`_
* `VMware's official Guest Operating system customization matrix <https://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf>`_
* `VMware Compatibility Guide <https://www.vmware.com/resources/compatibility/search.php>`_

View file

@ -0,0 +1,9 @@
:orphan:
.. _vmware_ansible_getting_started:
***************************************
Getting Started with Ansible for VMware
***************************************
This will have a basic "hello world" scenario/walkthrough that gets the user introduced to the basics.

View file

@ -0,0 +1,43 @@
.. _vmware_ansible_intro:
**********************************
Introduction to Ansible for VMware
**********************************
.. contents:: Topics
Introduction
============
Ansible provides various modules to manage VMware infrastructure, which includes datacenters, clusters,
host systems, and virtual machines.
Requirements
============
Ansible VMware modules are written on top of `pyVmomi <https://github.com/vmware/pyvmomi>`_.
pyVmomi is the Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi,
and vCenter infrastructure. You can install pyVmomi using pip:
.. code-block:: bash
$ pip install pyvmomi
vmware_guest module
===================
The :ref:`vmware_guest<vmware_guest_module>` module manages various operations related to virtual machines in the given ESXi or vCenter server.
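For example, a minimal sketch (the names and addresses below are hypothetical) that ensures an existing virtual machine is powered on:

.. code-block:: yaml

    - name: Ensure the virtual machine is powered on
      vmware_guest:
        hostname: vcenter.example.com
        username: administrator@vsphere.local
        password: "{{ vcenter_pass }}"
        validate_certs: no
        datacenter: DC1
        name: testvm_1
        state: poweredon
      delegate_to: localhost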
.. seealso::
`pyVmomi <https://github.com/vmware/pyvmomi>`_
The GitHub Page of pyVmomi
`pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
The issue tracker for the pyVmomi project
`govc <https://github.com/vmware/govmomi/tree/master/govc>`_
govc is a vSphere CLI built on top of govmomi
:ref:`working_with_playbooks`
An introduction to playbooks

View file

@ -0,0 +1,73 @@
.. _vmware_ansible_inventory:
*************************************
Using VMware dynamic inventory plugin
*************************************
.. contents:: Topics
VMware Dynamic Inventory Plugin
===============================
The best way to interact with your hosts is to use the VMware dynamic inventory plugin, which dynamically queries VMware APIs and
tells Ansible what nodes can be managed.
To be able to use this VMware dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
.. code-block:: ini
[inventory]
enable_plugins = vmware_vm_inventory
Then, create a file that ends in ``.vmware.yml`` or ``.vmware.yaml`` in your working directory.
The ``vmware_vm_inventory`` plugin takes in the same authentication information as any VMware module.
Here's an example of a valid inventory file:
.. code-block:: yaml
plugin: vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: administrator@vsphere.local
password: Esxi@123$%
validate_certs: False
with_tags: True
Executing ``ansible-inventory --list -i <filename>.vmware.yml`` will create a list of VMware instances that are ready to be configured using Ansible.
Using vaulted configuration files
=================================
Since the inventory configuration file contains the vCenter password in plain text, which is a security risk, you may want to
encrypt your entire inventory configuration file.
You can encrypt a valid inventory configuration file as follows:
.. code-block:: bash
$ ansible-vault encrypt <filename>.vmware.yml
New Vault password:
Confirm New Vault password:
Encryption successful
You can then use this vaulted inventory configuration file with:
.. code-block:: bash
$ ansible-inventory -i <filename>.vmware.yml --list --vault-password-file=/path/to/vault_password_file
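
You can pass the same vault password file when running a playbook against the vaulted inventory; the playbook name ``site.yml`` below is a placeholder:

.. code-block:: bash

    $ ansible-playbook -i <filename>.vmware.yml --vault-password-file=/path/to/vault_password_file site.yml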
.. seealso::
`pyVmomi <https://github.com/vmware/pyvmomi>`_
The GitHub Page of pyVmomi
`pyVmomi Issue Tracker <https://github.com/vmware/pyvmomi/issues>`_
The issue tracker for the pyVmomi project
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_vault`
Using Vault in playbooks

View file

@ -0,0 +1,9 @@
:orphan:
.. _vmware_ansible_module_index:
***************************
Ansible VMware Module Guide
***************************
This will be a listing similar to the module index in our core docs.

View file

@ -0,0 +1,44 @@
.. _vmware_requirements:
********************
VMware Prerequisites
********************
.. contents::
:local:
Installing SSL Certificates
===========================
All vCenter and ESXi servers require SSL encryption on all connections to enforce secure communication. You must enable SSL encryption for Ansible by installing the server's SSL certificates on your Ansible control node or delegate node.
If the SSL certificate of your vCenter or ESXi server is not correctly installed on your Ansible control node, you will see the following warning when using Ansible VMware modules:
``Unable to connect to vCenter or ESXi API at xx.xx.xx.xx on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)``
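One quick way to inspect the certificate a server presents, assuming ``openssl`` is available on your control node:

.. code-block:: bash

    $ echo | openssl s_client -connect vcenter-domain.example.com:443 -showcerts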
To install the SSL certificate for your VMware server and run your Ansible VMware modules in encrypted mode, follow the instructions below for the type of server you are running.
Installing vCenter SSL certificates for Ansible
-----------------------------------------------
* From any web browser, go to the base URL of the vCenter Server without a port number, for example ``https://vcenter-domain.example.com``.
* Click the "Download trusted root CA certificates" link at the bottom of the grey box on the right and download the file.
* Change the extension of the file to .zip. The file is a ZIP file of all root certificates and all CRLs.
* Extract the contents of the zip file. The extracted directory contains a ``.certs`` directory that contains two types of files. Files with a number as the extension (.0, .1, and so on) are root certificates.
* Install the certificate files as trusted certificates, using the process appropriate for your operating system (see the sketch after this list for a Debian/Ubuntu-based control node).
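
A minimal sketch for a Debian/Ubuntu-based control node; the certificate file name is a placeholder, and other operating systems use different tools and paths:

.. code-block:: bash

    # Copy an extracted root certificate into the system trust store, then refresh the store
    $ sudo cp certs/cert.0 /usr/local/share/ca-certificates/vcenter-root-ca.crt
    $ sudo update-ca-certificates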
Installing ESXi SSL certificates for Ansible
--------------------------------------------
* Enable the SSH service on ESXi either by using the Ansible VMware module `vmware_host_service_manager <https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py>`_ or manually using the vSphere Web interface.
* SSH to the ESXi server using administrative credentials, and navigate to the ``/etc/vmware/ssl`` directory.
* Secure copy (SCP) the ``rui.crt`` file located in ``/etc/vmware/ssl`` to your Ansible control node.
* Install the certificate file using the process appropriate for your operating system, for example as shown in the sketch after this list.
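
A minimal sketch for a Debian/Ubuntu-based control node; the ESXi hostname is a placeholder:

.. code-block:: bash

    # Copy the certificate from the ESXi host, add it to the system trust store, then refresh the store
    $ scp root@esxi01.example.com:/etc/vmware/ssl/rui.crt .
    $ sudo cp rui.crt /usr/local/share/ca-certificates/esxi01.crt
    $ sudo update-ca-certificates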

View file

@ -0,0 +1,39 @@
:orphan:
**********************************
Sample Scenario for Ansible VMware
**********************************
Introductory paragraph.
.. contents::
:local:
Scenario Requirements
=====================
Describe the requirements and assumptions for this scenario.
Example Description
===================
Description and code here.
Example Output
--------------
What the user should expect to see.
Troubleshooting
---------------
What to look for if it breaks.
Conclusion and Where To Go Next
===============================
Recap of important points. For more information please see: links.

View file

@ -0,0 +1,16 @@
.. _vmware_scenarios:
****************************
Ansible for VMware Scenarios
****************************
These scenarios teach you how to accomplish common VMware tasks using Ansible. To get started, please select the task you want to accomplish.
.. toctree::
:maxdepth: 1
scenario_clone_template
scenario_rename_vm
scenario_remove_vm
scenario_find_vm_folder
scenario_vmware_http

View file

@ -0,0 +1,102 @@
.. _vmware_troubleshooting:
**********************************
Troubleshooting Ansible for VMware
**********************************
.. contents:: Topics
This section lists things that can go wrong and possible ways to fix them.
Debugging Ansible for VMware
============================
When debugging or creating a new issue, you will need information about your VMware infrastructure. You can get this information using
`govc <https://github.com/vmware/govmomi/tree/master/govc>`_. For example:
.. code-block:: bash
$ export GOVC_USERNAME=ESXI_OR_VCENTER_USERNAME
$ export GOVC_PASSWORD=ESXI_OR_VCENTER_PASSWORD
$ export GOVC_URL=https://ESXI_OR_VCENTER_HOSTNAME:443
$ govc find /
Known issues with Ansible for VMware
====================================
Network settings with vmware_guest in Ubuntu 18.04
--------------------------------------------------
Setting the network with ``vmware_guest`` in Ubuntu 18.04 is known to be broken, due to missing ``netplan`` support in ``open-vm-tools``.
This issue is tracked via:
* https://github.com/vmware/open-vm-tools/issues/240
* https://github.com/ansible/ansible/issues/41133
Potential Workarounds
^^^^^^^^^^^^^^^^^^^^^
There are several workarounds for this issue.
1) Modify the Ubuntu 18.04 images by installing ``ifupdown`` in them via ``sudo apt install ifupdown``.
If you do so, you also need to remove ``netplan`` via ``sudo apt remove netplan.io`` and to disable ``systemd-networkd`` via ``sudo systemctl disable systemd-networkd``, as shown below.
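These commands run inside the Ubuntu 18.04 guest:

.. code-block:: bash

    $ sudo apt install ifupdown
    $ sudo apt remove netplan.io
    $ sudo systemctl disable systemd-networkd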
2) Generate the ``systemd-networkd`` files with a task in your VMware Ansible role:
.. code-block:: yaml
- name: make sure cache directory exists
file: path="{{ inventory_dir }}/cache" state=directory
delegate_to: localhost
- name: generate network templates
template: src=network.j2 dest="{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
delegate_to: localhost
- name: copy generated files to vm
vmware_guest_file_operation:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
validate_certs: "{{ vmware_general.validate_certs }}"
vm_id: "{{ inventory_hostname }}"
vm_username: root
vm_password: "{{ template_password }}"
copy:
src: "{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
dest: "/etc/systemd/network/ens160.network"
overwrite: False
delegate_to: localhost
- name: restart systemd-networkd
vmware_vm_shell:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
folder: /vm
vm_id: "{{ inventory_hostname}}"
vm_username: root
vm_password: "{{ template_password }}"
vm_shell: /bin/systemctl
vm_shell_args: " restart systemd-networkd"
delegate_to: localhost
- name: restart systemd-resolved
vmware_vm_shell:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
folder: /vm
vm_id: "{{ inventory_hostname}}"
vm_username: root
vm_password: "{{ template_password }}"
vm_shell: /bin/systemctl
vm_shell_args: " restart systemd-resolved"
delegate_to: localhost
3) Wait for ``netplan`` support in ``open-vm-tools``