Adjust YAML in extra docs (#10252)

Adjust YAML in extra docs.
Felix Fontein 2025-06-16 17:44:46 +02:00 committed by GitHub
parent 0aeb1b7bb2
commit e938ca5f20
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
4 changed files with 107 additions and 107 deletions


@@ -26,8 +26,8 @@ You can use the :ansplugin:`community.general.dict_kv filter <community.general.

    type: host
    database: all
    myservers:
      - server1
      - server2

This produces:
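The produced output falls outside this hunk. As background, the ``dict_kv`` filter referenced above wraps a single value into a one-key dictionary; a minimal sketch (the task and values are illustrative, not from the patched file):

```yaml
- name: Wrap a plain value into a one-key dictionary
  ansible.builtin.debug:
    msg: "{{ 'server1' | community.general.dict_kv('name') }}"
```

This prints the dictionary ``{"name": "server1"}``.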


@@ -17,50 +17,50 @@ Consider this data structure:

.. code-block:: yaml+jinja

    {
        "domain_definition": {
            "domain": {
                "cluster": [
                    {
                        "name": "cluster1"
                    },
                    {
                        "name": "cluster2"
                    }
                ],
                "server": [
                    {
                        "name": "server11",
                        "cluster": "cluster1",
                        "port": "8080"
                    },
                    {
                        "name": "server12",
                        "cluster": "cluster1",
                        "port": "8090"
                    },
                    {
                        "name": "server21",
                        "cluster": "cluster2",
                        "port": "9080"
                    },
                    {
                        "name": "server22",
                        "cluster": "cluster2",
                        "port": "9090"
                    }
                ],
                "library": [
                    {
                        "name": "lib1",
                        "target": "cluster1"
                    },
                    {
                        "name": "lib2",
                        "target": "cluster2"
                    }
                ]
            }
        }
    }

To extract all clusters from this structure, you can use the following query:
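The query itself lies outside this hunk; as a sketch, extracting the cluster names with the ``community.general.json_query`` filter could look like this (the JMESPath expression is an assumption based on the structure shown above):

```yaml
- name: Display all cluster names
  ansible.builtin.debug:
    msg: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
```

For the data above, the JMESPath expression ``domain.cluster[*].name`` selects ``["cluster1", "cluster2"]``.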


@@ -78,17 +78,17 @@ If you do not specify a ``count_tag``, the task creates the number of instances

  tasks:
    - name: Create a set of instances
      community.general.ali_instance:
        instance_type: ecs.n4.small
        image_id: "{{ ami_id }}"
        instance_name: "My-new-instance"
        instance_tags:
          Name: NewECS
          Version: 0.0.1
        count: 5
        count_tag:
          Name: NewECS
        allocate_public_ip: true
        max_bandwidth_out: 50
      register: create_instance

In the example playbook above, data about the newly created instances is saved in the variable defined by the ``register`` keyword in the task.
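As a sketch of how the registered variable might be used afterwards (the exact return-value field of ``ali_instance`` is an assumption here, not part of the patched file):

```yaml
- name: Show the IDs of the instances created above
  ansible.builtin.debug:
    var: create_instance.ids
```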


@@ -67,16 +67,16 @@ The following code block is a simple playbook that creates one `Type 0 <https://

  hosts: localhost
  tasks:
    - community.general.packet_sshkey:
        key_file: ./id_rsa.pub
        label: tutorial key
    - community.general.packet_device:
        project_id: <your_project_id>
        hostnames: myserver
        operating_system: ubuntu_16_04
        plan: baremetal_0
        facility: sjc1

After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify this through a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
@@ -110,10 +110,10 @@ If your playbook acts on existing Packet devices, you can only pass the ``hostna

  hosts: localhost
  tasks:
    - community.general.packet_device:
        project_id: <your_project_id>
        hostnames: myserver
        state: rebooted

You can also identify specific Packet devices with the ``device_ids`` parameter. The device's UUID can be found in the `Packet Portal <https://app.packet.net/portal>`_ or by using a `CLI <https://www.packet.net/developers/integrations/>`_. The following playbook removes a Packet device using the ``device_ids`` field:
@@ -125,10 +125,10 @@ You can also identify specific Packet devices with the ``device_ids`` parameter.

  hosts: localhost
  tasks:
    - community.general.packet_device:
        project_id: <your_project_id>
        device_ids: <myserver_device_id>
        state: absent

More Complex Playbooks
@@ -153,43 +153,43 @@ The following playbook will create an SSH key, 3 Packet servers, and then wait u

  hosts: localhost
  tasks:
    - community.general.packet_sshkey:
        key_file: ./id_rsa.pub
        label: new
    - community.general.packet_device:
        hostnames: [coreos-one, coreos-two, coreos-three]
        operating_system: coreos_beta
        plan: baremetal_0
        facility: ewr1
        project_id: <your_project_id>
        wait_for_public_IPv: 4
        user_data: |
          #cloud-config
          coreos:
            etcd2:
              discovery: https://discovery.etcd.io/<token>
              advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
              initial-advertise-peer-urls: http://$private_ipv4:2380
              listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
              listen-peer-urls: http://$private_ipv4:2380
            fleet:
              public-ip: $private_ipv4
            units:
              - name: etcd2.service
                command: start
              - name: fleet.service
                command: start
      register: newhosts
    - name: wait for ssh
      ansible.builtin.wait_for:
        delay: 1
        host: "{{ item.public_ipv4 }}"
        port: 22
        state: started
        timeout: 500
      loop: "{{ newhosts.results[0].devices }}"

As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
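A sketch of what that idempotency means in practice (the ``register``/``debug`` handling is illustrative, not from the patched file): re-running the key upload reports ``ok`` rather than ``changed``:

```yaml
- community.general.packet_sshkey:
    key_file: ./id_rsa.pub
    label: new
  register: key_result

- name: On a re-run the key already exists, so nothing changes
  ansible.builtin.debug:
    msg: "Key upload changed something: {{ key_result.changed }}"
```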