Mirror of https://github.com/ansible-collections/community.general.git
E501 fixes (#22879)
This commit is contained in:
parent 4fdeade389
commit 3164e8b561
215 changed files with 1328 additions and 761 deletions
@@ -71,7 +71,9 @@ options:
resource_tags:
description:
- 'A dictionary array of resource tags of the form C({ tag1: value1, tag2: value2 }).
- Tags in this list are used in conjunction with CIDR block to uniquely identify a VPC in lieu of vpc_id. Therefore, if CIDR/Tag combination does not exist, a new VPC will be created. VPC tags not on this list will be ignored. Prior to 1.7, specifying a resource tag was optional.'
- Tags in this list are used in conjunction with CIDR block to uniquely identify a VPC in lieu of vpc_id. Therefore,
if CIDR/Tag combination does not exist, a new VPC will be created. VPC tags not on this list will be ignored. Prior to 1.7,
specifying a resource tag was optional.'
required: true
version_added: "1.6"
internet_gateway:
@@ -82,7 +84,15 @@ options:
choices: [ "yes", "no" ]
route_tables:
description:
- 'A dictionary array of route tables to add of the form: C({ subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},], resource_tags: ... }). Where the subnets list is those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword for the gw of igw specifies that you should the route should go through the internet gateway attached to the VPC. gw also accepts instance-ids, interface-ids, and vpc-peering-connection-ids in addition igw. resource_tags is optional and uses dictionary form: C({ "Name": "public", ... }). This module is currently unable to affect the "main" route table due to some limitations in boto, so you must explicitly define the associated subnets or they will be attached to the main table implicitly. As of 1.8, if the route_tables parameter is not specified, no existing routes will be modified.'
- >
A dictionary array of route tables to add of the form:
C({ subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},], resource_tags: ... }). Where the subnets list is
those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword
for the gw of igw specifies that you should the route should go through the internet gateway attached to the VPC. gw also accepts instance-ids,
interface-ids, and vpc-peering-connection-ids in addition igw. resource_tags is optional and uses dictionary form: C({ "Name": "public", ... }).
This module is currently unable to affect the "main" route table due to some limitations in boto, so you must explicitly define the associated
subnets or they will be attached to the main table implicitly. As of 1.8, if the route_tables parameter is not specified, no existing routes
will be modified.
required: false
default: null
wait:
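To make the wrapped resource_tags and route_tables descriptions above concrete, a task along the following lines matches the documented structure. This is an illustrative sketch only: the CIDRs, tag values and region are hypothetical, and cidr_block/state are assumed from the module's wider interface rather than shown in this hunk.

    - name: ensure a VPC identified by CIDR block and tags exists
      ec2_vpc:
        state: present                      # assumed option, not part of this hunk
        cidr_block: 172.22.0.0/16           # assumed option, hypothetical value
        region: us-east-1
        resource_tags: { "Environment": "staging" }
        internet_gateway: yes
        route_tables:
          - subnets:
              - 172.22.2.0/24
              - 172.22.3.0/24
            routes:
              - dest: 0.0.0.0/0
                gw: igw
            resource_tags: { "Name": "public" }
        wait: yes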
@@ -283,7 +283,8 @@ def main():
if not g in statement_label:
module.fail_json(msg='{} is an unknown grant type.'.format(g))

ret = do_grant(kms, module.params['key_arn'], module.params['role_arn'], module.params['grant_types'], mode=mode, dry_run=module.check_mode, clean_invalid_entries=module.params['clean_invalid_entries'])
ret = do_grant(kms, module.params['key_arn'], module.params['role_arn'], module.params['grant_types'], mode=mode, dry_run=module.check_mode,
clean_invalid_entries=module.params['clean_invalid_entries'])
result.update(ret)

except Exception as err:
@@ -33,7 +33,8 @@ short_description: Create or delete an AWS CloudFormation stack
description:
- Launches or updates an AWS CloudFormation stack and waits for it complete.
notes:
- As of version 2.3, migrated to boto3 to enable new features. To match existing behavior, YAML parsing is done in the module, not given to AWS as YAML. This will change (in fact, it may change before 2.3 is out).
- As of version 2.3, migrated to boto3 to enable new features. To match existing behavior, YAML parsing is done in the module, not given to AWS as YAML.
This will change (in fact, it may change before 2.3 is out).
version_added: "1.1"
options:
stack_name:
@@ -59,8 +60,10 @@ options:
template:
description:
- The local path of the cloudformation template.
- This must be the full path to the file, relative to the working directory. If using roles this may look like "roles/cloudformation/files/cloudformation-example.json".
- If 'state' is 'present' and the stack does not exist yet, either 'template' or 'template_url' must be specified (but not both). If 'state' is present, the stack does exist, and neither 'template' nor 'template_url' are specified, the previous template will be reused.
- This must be the full path to the file, relative to the working directory. If using roles this may look
like "roles/cloudformation/files/cloudformation-example.json".
- If 'state' is 'present' and the stack does not exist yet, either 'template' or 'template_url' must be specified (but not both). If 'state' is
present, the stack does exist, and neither 'template' nor 'template_url' are specified, the previous template will be reused.
required: false
default: null
notification_arns:
@@ -71,7 +74,8 @@ options:
version_added: "2.0"
stack_policy:
description:
- the path of the cloudformation stack policy. A policy cannot be removed once placed, but it can be modified. (for instance, [allow all updates](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html#d0e9051)
- the path of the cloudformation stack policy. A policy cannot be removed once placed, but it can be modified.
(for instance, [allow all updates](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html#d0e9051)
required: false
default: null
version_added: "1.9"
@@ -83,20 +87,24 @@ options:
version_added: "1.4"
template_url:
description:
- Location of file containing the template body. The URL must point to a template (max size 307,200 bytes) located in an S3 bucket in the same region as the stack.
- If 'state' is 'present' and the stack does not exist yet, either 'template' or 'template_url' must be specified (but not both). If 'state' is present, the stack does exist, and neither 'template' nor 'template_url' are specified, the previous template will be reused.
- Location of file containing the template body. The URL must point to a template (max size 307,200 bytes) located in an S3 bucket in the same region
as the stack.
- If 'state' is 'present' and the stack does not exist yet, either 'template' or 'template_url' must be specified (but not both). If 'state' is
present, the stack does exist, and neither 'template' nor 'template_url' are specified, the previous template will be reused.
required: false
version_added: "2.0"
template_format:
description:
- (deprecated) For local templates, allows specification of json or yaml format. Templates are now passed raw to CloudFormation regardless of format. This parameter is ignored since Ansible 2.3.
- (deprecated) For local templates, allows specification of json or yaml format. Templates are now passed raw to CloudFormation regardless of format.
This parameter is ignored since Ansible 2.3.
default: json
choices: [ json, yaml ]
required: false
version_added: "2.0"
role_arn:
description:
- The role that AWS CloudFormation assumes to create the stack. See the AWS CloudFormation Service Role docs U(http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html)
- The role that AWS CloudFormation assumes to create the stack. See the AWS CloudFormation Service Role
docs U(http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html)
required: false
default: null
version_added: "2.3"
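The template/template_url descriptions above spell out their mutual exclusion; a sketch of how the two variants would be invoked, with hypothetical paths, ARNs and names (stack_name and state are assumed standard options, not part of this hunk):

    - name: launch a stack from a local template file
      cloudformation:
        stack_name: example-stack
        state: present
        template: roles/cloudformation/files/cloudformation-example.json
        role_arn: arn:aws:iam::123456789012:role/cloudformation-service-role

    - name: or launch the same stack from a template stored in S3 (never both)
      cloudformation:
        stack_name: example-stack
        state: present
        template_url: https://s3.amazonaws.com/example-bucket/cloudformation-example.json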
@@ -212,7 +220,7 @@ stack_outputs:
description: A key:value dictionary of all the stack outputs currently defined. If there are no stack outputs, it is an empty dictionary.
returned: always
sample: {"MySg": "AnsibleModuleTestYAML-CFTestSg-C8UVS567B6NS"}
'''
''' # NOQA

import json
import time
@@ -112,11 +112,13 @@ stack_description:
returned: always
type: dict
stack_outputs:
description: Dictionary of stack outputs keyed by the value of each output 'OutputKey' parameter and corresponding value of each output 'OutputValue' parameter
description: Dictionary of stack outputs keyed by the value of each output 'OutputKey' parameter and corresponding value of each
output 'OutputValue' parameter
returned: always
type: dict
stack_parameters:
description: Dictionary of stack parameters keyed by the value of each parameter 'ParameterKey' parameter and corresponding value of each parameter 'ParameterValue' parameter
description: Dictionary of stack parameters keyed by the value of each parameter 'ParameterKey' parameter and corresponding value of
each parameter 'ParameterValue' parameter
returned: always
type: dict
stack_events:
@@ -136,7 +138,8 @@ stack_resource_list:
returned: only if all_facts or stack_resourses is true
type: list of resources
stack_resources:
description: Dictionary of stack resources keyed by the value of each resource 'LogicalResourceId' parameter and corresponding value of each resource 'PhysicalResourceId' parameter
description: Dictionary of stack resources keyed by the value of each resource 'LogicalResourceId' parameter and corresponding value of each
resource 'PhysicalResourceId' parameter
returned: only if all_facts or stack_resourses is true
type: dict
'''
@@ -44,7 +44,8 @@ options:
s3_bucket_prefix:
description:
- bucket to place CloudTrail in.
- this bucket should exist and have the proper policy. See U(http://docs.aws.amazon.com/awscloudtrail/latest/userguide/aggregating_logs_regions_bucket_policy.html)
- this bucket should exist and have the proper policy.
See U(http://docs.aws.amazon.com/awscloudtrail/latest/userguide/aggregating_logs_regions_bucket_policy.html)
- required when state=enabled.
required: false
s3_key_prefix:
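Since s3_bucket_prefix is only required when state=enabled, a minimal enabling task might look like the sketch below; the trail name, bucket and region are hypothetical, and state/name/s3_key_prefix are assumed from the module's other documented options:

    - name: enable CloudTrail logging to an existing, properly-policied bucket
      cloudtrail:
        state: enabled
        name: default
        s3_bucket_prefix: example-cloudtrail-bucket
        s3_key_prefix: logs
        region: us-east-1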
@@ -215,12 +216,14 @@ def main():
results['view'].get('S3KeyPrefix', '') != s3_key_prefix or \
results['view']['IncludeGlobalServiceEvents'] != include_global_events:
if not module.check_mode:
results['update'] = cf_man.update(name=ct_name, s3_bucket_name=s3_bucket_name, s3_key_prefix=s3_key_prefix, include_global_service_events=include_global_events)
results['update'] = cf_man.update(name=ct_name, s3_bucket_name=s3_bucket_name, s3_key_prefix=s3_key_prefix,
include_global_service_events=include_global_events)
results['changed'] = True
else:
if not module.check_mode:
# doesn't exist. create it.
results['enable'] = cf_man.enable(name=ct_name, s3_bucket_name=s3_bucket_name, s3_key_prefix=s3_key_prefix, include_global_service_events=include_global_events)
results['enable'] = cf_man.enable(name=ct_name, s3_bucket_name=s3_bucket_name, s3_key_prefix=s3_key_prefix,
include_global_service_events=include_global_events)
results['changed'] = True

# given cloudtrail should exist now. Enable the logging.
@@ -117,7 +117,7 @@ targets:
returned: success
type: list
sample: "[{ 'arn': 'arn:aws:lambda:us-east-1:123456789012:function:MyFunction', 'id': 'MyTargetId' }]"
'''
''' # NOQA


class CloudWatchEventRule(object):
@@ -332,7 +332,11 @@ def get_changed_global_indexes(table, global_indexes):
removed_indexes = dict((name, index) for name, index in table_index_info.items() if name not in set_index_info)
added_indexes = dict((name, set_index_objects[name]) for name, index in set_index_info.items() if name not in table_index_info)
# todo: uncomment once boto has https://github.com/boto/boto/pull/3447 fixed
# index_throughput_changes = dict((name, index.throughput) for name, index in set_index_objects.items() if name not in added_indexes and (index.throughput['read'] != str(table_index_objects[name].throughput['read']) or index.throughput['write'] != str(table_index_objects[name].throughput['write'])))
# for name, index in set_index_objects.items():
# if (name not in added_indexes and
# (index.throughput['read'] != str(table_index_objects[name].throughput['read']) or
# index.throughput['write'] != str(table_index_objects[name].throughput['write']))):
# index_throughput_changes[name] = index.throughput
# todo: remove once boto has https://github.com/boto/boto/pull/3447 fixed
index_throughput_changes = dict((name, index.throughput) for name, index in set_index_objects.items() if name not in added_indexes)
@@ -84,7 +84,8 @@ options:
default: null
no_reboot:
description:
- Flag indicating that the bundling process should not attempt to shutdown the instance before bundling. If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance.
- Flag indicating that the bundling process should not attempt to shutdown the instance before bundling. If this flag is True, the
responsibility of maintaining file system integrity is left to the owner of the instance.
required: false
default: no
choices: [ "yes", "no" ]
@@ -97,7 +98,9 @@ options:
version_added: "2.0"
description:
- List of device hashes/dictionaries with custom configurations (same block-device-mapping parameters)
- "Valid properties include: device_name, volume_type, size (in GB), delete_on_termination (boolean), no_device (boolean), snapshot_id, iops (for io1 volume_type)"
- >
Valid properties include: device_name, volume_type, size (in GB), delete_on_termination (boolean), no_device (boolean),
snapshot_id, iops (for io1 volume_type)
required: false
default: null
delete_snapshot:
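A sketch of the device list the wrapped text above describes, using the properties it names; the option name device_mapping and the instance_id/name parameters are assumptions here, and all values are hypothetical:

    - name: create an AMI without rebooting, with custom block device mappings
      ec2_ami:
        instance_id: i-xxxxxxxx        # assumed option, hypothetical value
        name: example-ami              # assumed option, hypothetical value
        no_reboot: yes
        device_mapping:                # option name assumed; properties from the doc above
          - device_name: /dev/sda1
            size: 20
            volume_type: gp2
            delete_on_termination: true
          - device_name: /dev/sdb
            no_device: true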
@@ -474,7 +477,8 @@ def create_image(module, ec2):
module.fail_json(msg="AMI creation failed, please see the AWS console for more details")
except boto.exception.EC2ResponseError as e:
if ('InvalidAMIID.NotFound' not in e.error_code and 'InvalidAMIID.Unavailable' not in e.error_code) and wait and i == wait_timeout - 1:
module.fail_json(msg="Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help. %s: %s" % (e.error_code, e.error_message))
module.fail_json(msg="Error while trying to find the new image. Using wait=yes and/or a longer "
"wait_timeout may help. %s: %s" % (e.error_code, e.error_message))
finally:
time.sleep(1)
@@ -569,7 +573,8 @@ def update_image(module, ec2, image_id):
try:
set_permissions = img.get_launch_permissions()
if set_permissions != launch_permissions:
if ('user_ids' in launch_permissions and launch_permissions['user_ids']) or ('group_names' in launch_permissions and launch_permissions['group_names']):
if (('user_ids' in launch_permissions and launch_permissions['user_ids']) or
('group_names' in launch_permissions and launch_permissions['group_names'])):
res = img.set_launch_permissions(**launch_permissions)
elif ('user_ids' in set_permissions and set_permissions['user_ids']) or ('group_names' in set_permissions and set_permissions['group_names']):
res = img.remove_launch_permissions(**set_permissions)
@@ -32,7 +32,8 @@ description:
- Results can be sorted and sliced
author: "Tom Bamford (@tombamford)"
notes:
- This module is not backwards compatible with the previous version of the ec2_search_ami module which worked only for Ubuntu AMIs listed on cloud-images.ubuntu.com.
- This module is not backwards compatible with the previous version of the ec2_search_ami module which worked only for Ubuntu AMIs listed on
cloud-images.ubuntu.com.
- See the example below for a suggestion of how to search by distro/release.
options:
region:
@@ -45,7 +46,9 @@ options:
- Search AMIs owned by the specified owner
- Can specify an AWS account ID, or one of the special IDs 'self', 'amazon' or 'aws-marketplace'
- If not specified, all EC2 AMIs in the specified region will be searched.
- You can include wildcards in many of the search options. An asterisk (*) matches zero or more characters, and a question mark (?) matches exactly one character. You can escape special characters using a backslash (\) before the character. For example, a value of \*amazon\?\\ searches for the literal string *amazon?\.
- You can include wildcards in many of the search options. An asterisk (*) matches zero or more characters, and a question mark (?) matches exactly one
character. You can escape special characters using a backslash (\) before the character. For example, a value of \*amazon\?\\ searches for the
literal string *amazon?\.
required: false
default: null
ami_id:
@@ -94,8 +97,24 @@ options:
description:
- Optional attribute which with to sort the results.
- If specifying 'tag', the 'tag_name' parameter is required.
- Starting at version 2.1, additional sort choices of architecture, block_device_mapping, creationDate, hypervisor, is_public, location, owner_id, platform, root_device_name, root_device_type, state, and virtualization_type are supported.
choices: ['name', 'description', 'tag', 'architecture', 'block_device_mapping', 'creationDate', 'hypervisor', 'is_public', 'location', 'owner_id', 'platform', 'root_device_name', 'root_device_type', 'state', 'virtualization_type']
- Starting at version 2.1, additional sort choices of architecture, block_device_mapping, creationDate, hypervisor, is_public, location, owner_id,
platform, root_device_name, root_device_type, state, and virtualization_type are supported.
choices:
- 'name'
- 'description'
- 'tag'
- 'architecture'
- 'block_device_mapping'
- 'creationDate'
- 'hypervisor'
- 'is_public'
- 'location'
- 'owner_id'
- 'platform'
- 'root_device_name'
- 'root_device_type'
- 'state'
- 'virtualization_type'
default: null
required: false
sort_tag:
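The owner, wildcard and sort choices documented above combine like this in practice; an illustrative sketch with a hypothetical name pattern (the name and region parameters are assumed from the module's other options):

    - name: find the most recently created AMI matching a name pattern
      ec2_ami_find:
        owner: self
        name: "webapp-*"
        region: us-east-1
        sort: creationDate
        sort_order: descending
      register: ami_find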
@@ -316,7 +335,8 @@ def main():
platform = dict(required=False),
product_code = dict(required=False),
sort = dict(required=False, default=None,
choices=['name', 'description', 'tag', 'architecture', 'block_device_mapping', 'creationDate', 'hypervisor', 'is_public', 'location', 'owner_id', 'platform', 'root_device_name', 'root_device_type', 'state', 'virtualization_type']),
choices=['name', 'description', 'tag', 'architecture', 'block_device_mapping', 'creationDate', 'hypervisor', 'is_public', 'location',
'owner_id', 'platform', 'root_device_name', 'root_device_type', 'state', 'virtualization_type']),
sort_tag = dict(required=False),
sort_order = dict(required=False, default='ascending',
choices=['ascending', 'descending']),
@@ -82,7 +82,8 @@ options:
default: 1
replace_instances:
description:
- List of instance_ids belonging to the named ASG that you would like to terminate and be replaced with instances matching the current launch configuration.
- List of instance_ids belonging to the named ASG that you would like to terminate and be replaced with instances matching the current launch
configuration.
required: false
version_added: "1.8"
default: None
@@ -129,14 +130,16 @@ options:
version_added: "1.8"
wait_for_instances:
description:
- Wait for the ASG instances to be in a ready state before exiting. If instances are behind an ELB, it will wait until the ELB determines all instances have a lifecycle_state of "InService" and a health_status of "Healthy".
- Wait for the ASG instances to be in a ready state before exiting. If instances are behind an ELB, it will wait until the ELB determines all
instances have a lifecycle_state of "InService" and a health_status of "Healthy".
version_added: "1.9"
default: yes
required: False
termination_policies:
description:
- An ordered list of criteria used for selecting instances to be removed from the Auto Scaling group when reducing capacity.
- For 'Default', when used to create a new autoscaling group, the "Default"i value is used. When used to change an existent autoscaling group, the current termination policies are maintained.
- For 'Default', when used to create a new autoscaling group, the "Default"i value is used. When used to change an existent autoscaling group, the
current termination policies are maintained.
required: false
default: Default
choices: ['OldestInstance', 'NewestInstance', 'OldestLaunchConfiguration', 'ClosestToNextInstanceHour', 'Default']
@@ -150,7 +153,11 @@ options:
notification_types:
description:
- A list of auto scaling events to trigger notifications on.
default: ['autoscaling:EC2_INSTANCE_LAUNCH', 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR', 'autoscaling:EC2_INSTANCE_TERMINATE', 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR']
default:
- 'autoscaling:EC2_INSTANCE_LAUNCH'
- 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR'
- 'autoscaling:EC2_INSTANCE_TERMINATE'
- 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
required: false
version_added: "2.2"
suspend_processes:
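Putting the wrapped replace_instances, wait_for_instances, termination_policies and notification_types descriptions together, a rolling-replacement task could be sketched as below; the ASG name, instance IDs and SNS topic are hypothetical, and name/notification_topic are assumed options not shown in these hunks:

    - name: replace two instances and wait until the ELB reports them healthy
      ec2_asg:
        name: public-webapp-production          # assumed option, hypothetical value
        replace_instances:
          - i-0aaaaaaaaaaaaaaaa
          - i-0bbbbbbbbbbbbbbbb
        wait_for_instances: yes
        termination_policies:
          - OldestLaunchConfiguration
          - Default
        notification_topic: arn:aws:sns:us-east-1:123456789012:asg-events   # assumed option
        notification_types:
          - 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR'
          - 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'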
@@ -35,7 +35,9 @@ options:
required: false
tags:
description:
- "A dictionary/hash of tags in the format { tag1_name: 'tag1_value', tag2_name: 'tag2_value' } to match against the auto scaling group(s) you are searching for."
- >
A dictionary/hash of tags in the format { tag1_name: 'tag1_value', tag2_name: 'tag2_value' } to match against the auto scaling
group(s) you are searching for.
required: false
extends_documentation_fragment:
- aws
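The tag matching described above is a simple key/value filter; a minimal sketch with hypothetical tag names:

    - name: gather facts for auto scaling groups carrying a given tag set
      ec2_asg_facts:
        tags:
          environment: production
          app: webapp
      register: asg_facts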
@@ -232,7 +234,10 @@ def find_asgs(conn, module, name=None, tags=None):
List
[
{
"auto_scaling_group_arn": "arn:aws:autoscaling:us-west-2:275977225706:autoScalingGroup:58abc686-9783-4528-b338-3ad6f1cbbbaf:autoScalingGroupName/public-webapp-production",
"auto_scaling_group_arn": (
"arn:aws:autoscaling:us-west-2:275977225706:autoScalingGroup:58abc686-9783-4528-b338-3ad6f1cbbbaf:"
"autoScalingGroupName/public-webapp-production"
),
"auto_scaling_group_name": "public-webapp-production",
"availability_zones": ["us-west-2c", "us-west-2b", "us-west-2a"],
"created_time": "2016-02-02T23:28:42.481000+00:00",
@@ -28,7 +28,9 @@ version_added: "2.2"
author: Michael Baydoun (@MichaelBaydoun)
requirements: [ botocore, boto3 ]
notes:
- You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent requests do not create new customer gateway resources.
- You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the
first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent
requests do not create new customer gateway resources.
- Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use
customer_gateway. See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.
options:
@@ -69,7 +69,8 @@ options:
version_added: "1.5"
wait_timeout:
description:
- Number of seconds to wait for an instance to change state. If 0 then this module may return an error if a transient error occurs. If non-zero then any transient errors are ignored until the timeout is reached. Ignored when wait=no.
- Number of seconds to wait for an instance to change state. If 0 then this module may return an error if a transient error occurs.
If non-zero then any transient errors are ignored until the timeout is reached. Ignored when wait=no.
required: false
default: 0
version_added: "1.6"
@@ -29,7 +29,8 @@ author: "Rob White (@wimnat)"
options:
filters:
description:
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html) for possible filters.
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html) for possible filters.
required: false
default: null
@@ -58,11 +58,14 @@ options:
required: false
security_groups:
description:
- A list of security groups to apply to the instances. For VPC instances, specify security group IDs. For EC2-Classic, specify either security group names or IDs.
- A list of security groups to apply to the instances. For VPC instances, specify security group IDs. For EC2-Classic, specify either security
group names or IDs.
required: false
volumes:
description:
- a list of volume dicts, each containing device name and optionally ephemeral id or snapshot id. Size and type (and number of iops for io device type) must be specified for a new volume or a root volume, and may be passed for a snapshot volume. For any volume, a volume size less than 1 will be interpreted as a request not to create the volume.
- a list of volume dicts, each containing device name and optionally ephemeral id or snapshot id.
Size and type (and number of iops for io device type) must be specified for a new volume or a root volume, and may be passed for a snapshot volume.
For any volume, a volume size less than 1 will be interpreted as a request not to create the volume.
required: false
user_data:
description:
@@ -87,7 +90,8 @@ options:
default: false
assign_public_ip:
description:
- Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud. Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.
- Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud. Specifies whether to assign a public IP
address to each instance launched in a Amazon VPC.
required: false
version_added: "1.8"
ramdisk_id:
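A sketch of a launch configuration using the security_groups, volumes and assign_public_ip options documented above; the image ID, group ID and, in particular, the key names inside each volume dict (volume_size, volume_type, delete_on_termination) are assumptions for illustration only:

    - name: create a launch configuration with one extra EBS volume
      ec2_lc:
        name: app-launch-config               # assumed option, hypothetical value
        image_id: ami-xxxxxxxx                # assumed option, hypothetical value
        instance_type: t2.micro               # assumed option, hypothetical value
        security_groups:
          - sg-xxxxxxxx
        assign_public_ip: yes
        volumes:                              # per-volume key names are assumptions
          - device_name: /dev/sda1
            volume_size: 10
            volume_type: gp2
            delete_on_termination: true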
@@ -73,7 +73,34 @@ options:
description:
- The threshold's unit of measurement
required: false
choices: ['Seconds','Microseconds','Milliseconds','Bytes','Kilobytes','Megabytes','Gigabytes','Terabytes','Bits','Kilobits','Megabits','Gigabits','Terabits','Percent','Count','Bytes/Second','Kilobytes/Second','Megabytes/Second','Gigabytes/Second','Terabytes/Second','Bits/Second','Kilobits/Second','Megabits/Second','Gigabits/Second','Terabits/Second','Count/Second','None']
choices:
- 'Seconds'
- 'Microseconds'
- 'Milliseconds'
- 'Bytes'
- 'Kilobytes'
- 'Megabytes'
- 'Gigabytes'
- 'Terabytes'
- 'Bits'
- 'Kilobits'
- 'Megabits'
- 'Gigabits'
- 'Terabits'
- 'Percent'
- 'Count'
- 'Bytes/Second'
- 'Kilobytes/Second'
- 'Megabytes/Second'
- 'Gigabytes/Second'
- 'Terabytes/Second'
- 'Bits/Second'
- 'Kilobits/Second'
- 'Megabits/Second'
- 'Gigabits/Second'
- 'Terabits/Second'
- 'Count/Second'
- 'None'
description:
description:
- A longer description of the alarm
@@ -254,7 +281,10 @@ def main():
comparison=dict(type='str', choices=['<=', '<', '>', '>=']),
threshold=dict(type='float'),
period=dict(type='int'),
unit=dict(type='str', choices=['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes', 'Megabytes', 'Gigabytes', 'Terabytes', 'Bits', 'Kilobits', 'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count', 'Bytes/Second', 'Kilobytes/Second', 'Megabytes/Second', 'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second', 'Kilobits/Second', 'Megabits/Second', 'Gigabits/Second', 'Terabits/Second', 'Count/Second', 'None']),
unit=dict(type='str', choices=['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes', 'Megabytes', 'Gigabytes', 'Terabytes',
'Bits', 'Kilobits', 'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count', 'Bytes/Second', 'Kilobytes/Second',
'Megabytes/Second', 'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second', 'Kilobits/Second', 'Megabits/Second',
'Gigabits/Second', 'Terabits/Second', 'Count/Second', 'None']),
evaluation_periods=dict(type='int'),
description=dict(type='str'),
dimensions=dict(type='dict', default={}),
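With the unit choices now listed one per line, a representative alarm definition might look like the sketch below; the name, metric, namespace and statistic parameters are assumed from the module's wider interface rather than this hunk, and all values are hypothetical:

    - name: alarm when average CPU stays above 80 percent
      ec2_metric_alarm:
        state: present                        # assumed option
        region: us-east-1
        name: cpu-high                        # assumed option, hypothetical value
        metric: CPUUtilization                # assumed option
        namespace: AWS/EC2                    # assumed option
        statistic: Average                    # assumed option
        comparison: ">="
        threshold: 80.0
        unit: Percent
        period: 300
        evaluation_periods: 3
        description: Average CPU above 80 percent for 15 minutes
        dimensions: { "InstanceId": "i-xxxxxxxx" }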
@ -28,7 +28,8 @@ version_added: "2.0"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
author:
|
||||
|
|
|
@ -110,7 +110,8 @@ def create_scaling_policy(connection, module):
|
|||
try:
|
||||
connection.create_scaling_policy(sp)
|
||||
policy = connection.get_all_policies(as_group=asg_name,policy_names=[sp_name])[0]
|
||||
module.exit_json(changed=True, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
|
||||
module.exit_json(changed=True, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment,
|
||||
cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
|
||||
except BotoServerError as e:
|
||||
module.fail_json(msg=str(e))
|
||||
else:
|
||||
|
@ -137,7 +138,8 @@ def create_scaling_policy(connection, module):
|
|||
if changed:
|
||||
connection.create_scaling_policy(policy)
|
||||
policy = connection.get_all_policies(as_group=asg_name,policy_names=[sp_name])[0]
|
||||
module.exit_json(changed=changed, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
|
||||
module.exit_json(changed=changed, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment,
|
||||
cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
|
||||
except BotoServerError as e:
|
||||
module.fail_json(msg=str(e))
|
||||
|
||||
|
|
|
@ -113,7 +113,9 @@ state:
|
|||
type: string
|
||||
sample: completed
|
||||
state_message:
|
||||
description: Encrypted Amazon EBS snapshots are copied asynchronously. If a snapshot copy operation fails (for example, if the proper AWS Key Management Service (AWS KMS) permissions are not obtained) this field displays error state details to help you diagnose why the error occurred.
|
||||
description: Encrypted Amazon EBS snapshots are copied asynchronously. If a snapshot copy operation fails (for example, if the proper
|
||||
AWS Key Management Service (AWS KMS) permissions are not obtained) this field displays error state details to help you diagnose why the
|
||||
error occurred.
|
||||
type: string
|
||||
sample:
|
||||
start_time:
|
||||
|
|
|
@ -24,7 +24,8 @@ DOCUMENTATION = '''
|
|||
module: ec2_tag
|
||||
short_description: create and remove tag(s) to ec2 resources.
|
||||
description:
|
||||
- Creates, removes and lists tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto.
|
||||
- Creates, removes and lists tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX).
|
||||
It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto.
|
||||
version_added: "1.3"
|
||||
options:
|
||||
resource:
|
||||
|
|
|
@ -29,7 +29,8 @@ author: "Rob White (@wimnat)"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVolumes.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVolumes.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
extends_documentation_fragment:
|
||||
|
|
|
@ -30,7 +30,8 @@ author: "Nick Aslanidis (@naslanidis)"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
DhcpOptionsIds:
|
||||
|
|
|
@ -61,7 +61,8 @@ options:
|
|||
required: false
|
||||
tags:
|
||||
description:
|
||||
- The tags you want attached to the VPC. This is independent of the name value, note if you pass a 'Name' key it would override the Name of the VPC if it's different.
|
||||
- The tags you want attached to the VPC. This is independent of the name value, note if you pass a 'Name' key it would override the Name of
|
||||
the VPC if it's different.
|
||||
default: None
|
||||
required: false
|
||||
aliases: [ 'resource_tags' ]
|
||||
|
@ -73,7 +74,8 @@ options:
|
|||
choices: [ 'present', 'absent' ]
|
||||
multi_ok:
|
||||
description:
|
||||
- By default the module will not create another VPC if there is another VPC with the same name and CIDR block. Specify this as true if you want duplicate VPCs created.
|
||||
- By default the module will not create another VPC if there is another VPC with the same name and CIDR block. Specify this as true if you want
|
||||
duplicate VPCs created.
|
||||
default: false
|
||||
required: false
|
||||
|
||||
|
|
|
@ -29,7 +29,8 @@ author: "Rob White (@wimnat)"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
|
||||
|
|
|
@ -29,7 +29,8 @@ author: "Rob White (@wimnat)"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
extends_documentation_fragment:
|
||||
|
|
|
@ -29,7 +29,8 @@ author: "Rob White (@wimnat)"
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html) for possible filters.
|
||||
required: false
|
||||
default: null
|
||||
extends_documentation_fragment:
|
||||
|
|
|
@ -29,7 +29,8 @@ requirements: [ boto3 ]
|
|||
options:
|
||||
filters:
|
||||
description:
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value. See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
|
||||
See U(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
|
||||
required: false
|
||||
default: None
|
||||
vpn_gateway_ids:
|
||||
|
|
|
@ -23,7 +23,8 @@ DOCUMENTATION = '''
|
|||
module: ec2_win_password
|
||||
short_description: gets the default administrator password for ec2 windows instances
|
||||
description:
|
||||
- Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. i-XXXXXXX). This module has a dependency on python-boto.
|
||||
- Gets the default administrator password from any EC2 Windows instance. The instance is referenced by its id (e.g. i-XXXXXXX). This module
|
||||
has a dependency on python-boto.
|
||||
version_added: "2.0"
|
||||
author: "Rick Mendes (@rickmendes)"
|
||||
options:
|
||||
|
@ -38,7 +39,8 @@ options:
|
|||
key_passphrase:
|
||||
version_added: "2.0"
|
||||
description:
|
||||
- The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to convert your password protected keys if they do not use DES or 3DES. ex) openssl rsa -in current_key -out new_key -des3.
|
||||
- The passphrase for the instance key pair. The key must use DES or 3DES encryption for this module to decrypt it. You can use openssl to
|
||||
convert your password protected keys if they do not use DES or 3DES. ex) openssl rsa -in current_key -out new_key -des3.
|
||||
required: false
|
||||
default: null
|
||||
wait:
|
||||
|
|
|
@ -69,7 +69,8 @@ options:
|
|||
required: false
|
||||
role:
|
||||
description:
|
||||
- The name or full Amazon Resource Name (ARN) of the IAM role that allows your Amazon ECS container agent to make calls to your load balancer on your behalf. This parameter is only required if you are using a load balancer with your service.
|
||||
- The name or full Amazon Resource Name (ARN) of the IAM role that allows your Amazon ECS container agent to make calls to your load balancer
|
||||
on your behalf. This parameter is only required if you are using a load balancer with your service.
|
||||
required: false
|
||||
delay:
|
||||
description:
|
||||
|
@ -164,7 +165,9 @@ service:
|
|||
returned: always
|
||||
type: int
|
||||
serviceArn:
|
||||
description: The Amazon Resource Name (ARN) that identifies the service. The ARN contains the arn:aws:ecs namespace, followed by the region of the service, the AWS account ID of the service owner, the service namespace, and then the service name. For example, arn:aws:ecs:region :012345678910 :service/my-service .
|
||||
description: The Amazon Resource Name (ARN) that identifies the service. The ARN contains the arn:aws:ecs namespace, followed by the region
|
||||
of the service, the AWS account ID of the service owner, the service namespace, and then the service name. For example,
|
||||
arn:aws:ecs:region :012345678910 :service/my-service .
|
||||
returned: always
|
||||
type: string
|
||||
serviceName:
|
||||
|
|
|
@ -130,7 +130,8 @@ services:
|
|||
description: lost of service events
|
||||
returned: always
|
||||
type: list of complex
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
try:
|
||||
import boto
|
||||
import botocore
|
||||
|
@ -167,7 +168,8 @@ class EcsServiceManager:
|
|||
# return self.client.list_clusters()
|
||||
# {'failures': [],
|
||||
# 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'ce7b5880-1c41-11e5-8a31-47a93a8a98eb'},
|
||||
# 'clusters': [{'activeServicesCount': 0, 'clusterArn': 'arn:aws:ecs:us-west-2:777110527155:cluster/default', 'status': 'ACTIVE', 'pendingTasksCount': 0, 'runningTasksCount': 0, 'registeredContainerInstancesCount': 0, 'clusterName': 'default'}]}
|
||||
# 'clusters': [{'activeServicesCount': 0, 'clusterArn': 'arn:aws:ecs:us-west-2:777110527155:cluster/default',
|
||||
# 'status': 'ACTIVE', 'pendingTasksCount': 0, 'runningTasksCount': 0, 'registeredContainerInstancesCount': 0, 'clusterName': 'default'}]}
|
||||
# {'failures': [{'arn': 'arn:aws:ecs:us-west-2:777110527155:cluster/bogus', 'reason': 'MISSING'}],
|
||||
# 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '0f66c219-1c42-11e5-8a31-47a93a8a98eb'},
|
||||
# 'clusters': []}
|
||||
|
|
|
@ -59,7 +59,8 @@ options:
|
|||
version_added: 2.3
|
||||
task_role_arn:
|
||||
description:
|
||||
- The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role.
|
||||
- The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted
|
||||
the permissions that are specified in this role.
|
||||
required: false
|
||||
version_added: 2.3
|
||||
volumes:
|
||||
|
@ -88,7 +89,10 @@ EXAMPLES = '''
|
|||
hostPort: 80
|
||||
- name: busybox
|
||||
command:
|
||||
- /bin/sh -c "while true; do echo '<html><head><title>Amazon ECS Sample App</title></head><body><div><h1>Amazon ECS Sample App</h1><h2>Congratulations!</h2><p>Your application is now running on a container in Amazon ECS.</p>' > top; /bin/date > date ; echo '</div></body></html>' > bottom; cat top date bottom > /usr/local/apache2/htdocs/index.html ; sleep 1; done"
|
||||
- >
|
||||
/bin/sh -c "while true; do echo '<html><head><title>Amazon ECS Sample App</title></head><body><div><h1>Amazon ECS Sample App</h1><h2>Congratulations!
|
||||
</h2><p>Your application is now running on a container in Amazon ECS.</p>' > top; /bin/date > date ; echo '</div></body></html>' > bottom;
|
||||
cat top date bottom > /usr/local/apache2/htdocs/index.html ; sleep 1; done"
|
||||
cpu: 10
|
||||
entryPoint:
|
||||
- sh
|
||||
|
@ -199,7 +203,12 @@ class EcsTaskManager:
|
|||
pass
|
||||
|
||||
# Return the full descriptions of the task definitions, sorted ascending by revision
|
||||
return list(sorted([self.ecs.describe_task_definition(taskDefinition=arn)['taskDefinition'] for arn in data['taskDefinitionArns']], key=lambda td: td['revision']))
|
||||
return list(
|
||||
sorted(
|
||||
[self.ecs.describe_task_definition(taskDefinition=arn)['taskDefinition'] for arn in data['taskDefinitionArns']],
|
||||
key=lambda td: td['revision']
|
||||
)
|
||||
)
|
||||
|
||||
def deregister_task(self, taskArn):
|
||||
response = self.ecs.deregister_task_definition(taskDefinition=taskArn)
|
||||
|
@ -256,7 +265,8 @@ def main():
|
|||
if not existing_definitions_in_family and revision != 1:
|
||||
module.fail_json(msg="You have specified a revision of %d but a created revision would be 1" % revision)
|
||||
elif existing_definitions_in_family and existing_definitions_in_family[-1]['revision'] + 1 != revision:
|
||||
module.fail_json(msg="You have specified a revision of %d but a created revision would be %d" % (revision, existing_definitions_in_family[-1]['revision'] + 1))
|
||||
module.fail_json(msg="You have specified a revision of %d but a created revision would be %d" %
|
||||
(revision, existing_definitions_in_family[-1]['revision'] + 1))
|
||||
else:
|
||||
existing = None
|
||||
|
||||
|
|
|
@ -31,7 +31,8 @@ author: "Jim Dalton (@jsdalton)"
|
|||
options:
|
||||
state:
|
||||
description:
|
||||
- C(absent) or C(present) are idempotent actions that will create or destroy a cache cluster as needed. C(rebooted) will reboot the cluster, resulting in a momentary outage.
|
||||
- C(absent) or C(present) are idempotent actions that will create or destroy a cache cluster as needed. C(rebooted) will reboot the cluster,
|
||||
resulting in a momentary outage.
|
||||
choices: ['present', 'absent', 'rebooted']
|
||||
required: true
|
||||
name:
|
||||
|
@ -65,7 +66,8 @@ options:
|
|||
default: None
|
||||
cache_parameter_group:
|
||||
description:
|
||||
- The name of the cache parameter group to associate with this cache cluster. If this argument is omitted, the default cache parameter group for the specified engine will be used.
|
||||
- The name of the cache parameter group to associate with this cache cluster. If this argument is omitted, the default cache parameter group
|
||||
for the specified engine will be used.
|
||||
required: false
|
||||
default: None
|
||||
version_added: "2.0"
|
||||
|
|
|
@ -70,7 +70,8 @@ options:
|
|||
- The path to the private key of the certificate in PEM encoded format.
|
||||
dup_ok:
|
||||
description:
|
||||
- By default the module will not upload a certificate that is already uploaded into AWS. If set to True, it will upload the certificate as long as the name is unique.
|
||||
- By default the module will not upload a certificate that is already uploaded into AWS. If set to True, it will upload the certificate as
|
||||
long as the name is unique.
|
||||
required: false
|
||||
default: False
|
||||
aliases: []
|
||||
|
|
|
@ -46,7 +46,8 @@ options:
|
|||
required: false
|
||||
policy_json:
|
||||
description:
|
||||
- A properly json formatted policy as string (mutually exclusive with C(policy_document), see https://github.com/ansible/ansible/issues/7005#issuecomment-42894813 on how to use it properly)
|
||||
- A properly json formatted policy as string (mutually exclusive with C(policy_document),
|
||||
see https://github.com/ansible/ansible/issues/7005#issuecomment-42894813 on how to use it properly)
|
||||
required: false
|
||||
state:
|
||||
description:
|
||||
|
@ -56,7 +57,8 @@ options:
|
|||
choices: [ "present", "absent"]
|
||||
skip_duplicates:
|
||||
description:
|
||||
- By default the module looks for any policies that match the document you pass in, if there is a match it will not make a new policy object with the same rules. You can override this by specifying false which would allow for two policy objects with different names but same rules.
|
||||
- By default the module looks for any policies that match the document you pass in, if there is a match it will not make a new policy object with
|
||||
the same rules. You can override this by specifying false which would allow for two policy objects with different names but same rules.
|
||||
required: false
|
||||
default: "/"
|
||||
|
||||
|
|
|
@ -43,7 +43,8 @@ options:
|
|||
required: false
|
||||
managed_policy:
|
||||
description:
|
||||
- A list of managed policy ARNs (can't use friendly names due to AWS API limitation) to attach to the role. To embed an inline policy, use M(iam_policy). To remove existing policies, use an empty list item.
|
||||
- A list of managed policy ARNs (can't use friendly names due to AWS API limitation) to attach to the role. To embed an inline policy,
|
||||
use M(iam_policy). To remove existing policies, use an empty list item.
|
||||
required: true
|
||||
state:
|
||||
description:
|
||||
|
|
|
@ -41,11 +41,13 @@ options:
|
|||
choices: [ 'present', 'absent' ]
|
||||
runtime:
|
||||
description:
|
||||
- The runtime environment for the Lambda function you are uploading. Required when creating a function. Use parameters as described in boto3 docs. Current example runtime environments are nodejs, nodejs4.3, java8 or python2.7
|
||||
- The runtime environment for the Lambda function you are uploading. Required when creating a function. Use parameters as described in boto3 docs.
|
||||
Current example runtime environments are nodejs, nodejs4.3, java8 or python2.7
|
||||
required: true
|
||||
role:
|
||||
description:
|
||||
- The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it executes your function to access any other Amazon Web Services (AWS) resources. You may use the bare ARN if the role belongs to the same AWS account.
|
||||
- The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it executes your function to access any other Amazon Web Services (AWS)
|
||||
resources. You may use the bare ARN if the role belongs to the same AWS account.
|
||||
default: null
|
||||
handler:
|
||||
description:
|
||||
|
@ -89,7 +91,8 @@ options:
|
|||
default: 128
|
||||
vpc_subnet_ids:
|
||||
description:
|
||||
- List of subnet IDs to run Lambda function in. Use this option if you need to access resources in your VPC. Leave empty if you don't want to run the function in a VPC.
|
||||
- List of subnet IDs to run Lambda function in. Use this option if you need to access resources in your VPC. Leave empty if you don't want to run
|
||||
the function in a VPC.
|
||||
required: false
|
||||
default: None
|
||||
vpc_security_group_ids:
|
||||
|
|
|
@ -25,7 +25,9 @@ module: rds
|
|||
version_added: "1.3"
|
||||
short_description: create, delete, or modify an Amazon rds instance
|
||||
description:
|
||||
- Creates, deletes, or modifies rds instances. When creating an instance it can be either a new instance or a read-only replica of an existing instance. This module has a dependency on python-boto >= 2.5. The 'promote' command requires boto >= 2.18.0. Certain features such as tags rely on boto.rds2 (boto >= 2.26.0)
|
||||
- Creates, deletes, or modifies rds instances. When creating an instance it can be either a new instance or a read-only replica of an existing
|
||||
instance. This module has a dependency on python-boto >= 2.5. The 'promote' command requires boto >= 2.18.0. Certain features such as tags rely
|
||||
on boto.rds2 (boto >= 2.26.0)
|
||||
options:
|
||||
command:
|
||||
description:
|
||||
|
@ -48,7 +50,7 @@ options:
|
|||
- mariadb was added in version 2.2
|
||||
required: false
|
||||
default: null
|
||||
choices: [ 'mariadb', 'MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres', 'aurora']
|
||||
choices: ['mariadb', 'MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres', 'aurora']
|
||||
size:
|
||||
description:
|
||||
- Size in gigabytes of the initial storage for the DB instance. Used only when command=create or command=modify.
|
||||
|
@ -56,7 +58,8 @@ options:
|
|||
default: null
|
||||
instance_type:
|
||||
description:
|
||||
- The instance type of the database. Must be specified when command=create. Optional when command=replicate, command=modify or command=restore. If not specified then the replica inherits the same instance type as the source instance.
|
||||
- The instance type of the database. Must be specified when command=create. Optional when command=replicate, command=modify or command=restore.
|
||||
If not specified then the replica inherits the same instance type as the source instance.
|
||||
required: false
|
||||
default: null
|
||||
username:
|
||||
|
@ -81,12 +84,13 @@ options:
|
|||
default: null
|
||||
engine_version:
|
||||
description:
|
||||
- Version number of the database engine to use. Used only when command=create. If not specified then the current Amazon RDS default engine version is used.
|
||||
- Version number of the database engine to use. Used only when command=create. If not specified then the current Amazon RDS default engine version is used
|
||||
required: false
|
||||
default: null
|
||||
parameter_group:
|
||||
description:
|
||||
- Name of the DB parameter group to associate with this instance. If omitted then the RDS default DBParameterGroup will be used. Used only when command=create or command=modify.
|
||||
- Name of the DB parameter group to associate with this instance. If omitted then the RDS default DBParameterGroup will be used. Used only
|
||||
when command=create or command=modify.
|
||||
required: false
|
||||
default: null
|
||||
license_model:
|
||||
|
@ -97,7 +101,8 @@ options:
|
|||
choices: [ 'license-included', 'bring-your-own-license', 'general-public-license', 'postgresql-license' ]
|
||||
multi_zone:
|
||||
description:
|
||||
- Specifies if this is a Multi-availability-zone deployment. Can not be used in conjunction with zone parameter. Used only when command=create or command=modify.
|
||||
- Specifies if this is a Multi-availability-zone deployment. Can not be used in conjunction with zone parameter. Used only when command=create or
|
||||
command=modify.
|
||||
choices: [ "yes", "no" ]
|
||||
required: false
|
||||
default: null
|
||||
|
@ -136,7 +141,9 @@ options:
|
|||
default: null
|
||||
maint_window:
|
||||
description:
|
||||
- "Maintenance window in format of ddd:hh24:mi-ddd:hh24:mi. (Example: Mon:22:00-Mon:23:15) If not specified then a random maintenance window is assigned. Used only when command=create or command=modify."
|
||||
- >
|
||||
Maintenance window in format of ddd:hh24:mi-ddd:hh24:mi. (Example: Mon:22:00-Mon:23:15) If not specified then a random maintenance window is
|
||||
assigned. Used only when command=create or command=modify.
|
||||
required: false
|
||||
default: null
|
||||
backup_window:
|
||||
|
@ -146,7 +153,9 @@ options:
|
|||
default: null
|
||||
backup_retention:
|
||||
description:
|
||||
- "Number of days backups are retained. Set to 0 to disable backups. Default is 1 day. Valid range: 0-35. Used only when command=create or command=modify."
|
||||
- >
|
||||
Number of days backups are retained. Set to 0 to disable backups. Default is 1 day. Valid range: 0-35. Used only when command=create or
|
||||
command=modify.
|
||||
required: false
|
||||
default: null
|
||||
zone:
|
||||
|
@ -162,7 +171,8 @@ options:
|
|||
default: null
|
||||
snapshot:
|
||||
description:
|
||||
- Name of snapshot to take. When command=delete, if no snapshot name is provided then no snapshot is taken. If used with command=delete with no instance_name, the snapshot is deleted. Used with command=facts, command=delete or command=snapshot.
|
||||
- Name of snapshot to take. When command=delete, if no snapshot name is provided then no snapshot is taken. If used with command=delete with
|
||||
no instance_name, the snapshot is deleted. Used with command=facts, command=delete or command=snapshot.
|
||||
required: false
|
||||
default: null
|
||||
aws_secret_key:
|
||||
|
@ -178,7 +188,8 @@ options:
|
|||
aliases: [ 'ec2_access_key', 'access_key' ]
|
||||
wait:
|
||||
description:
|
||||
- When command=create, replicate, modify or restore then wait for the database to enter the 'available' state. When command=delete wait for the database to be terminated.
|
||||
- When command=create, replicate, modify or restore then wait for the database to enter the 'available' state. When command=delete wait for
|
||||
the database to be terminated.
|
||||
required: false
|
||||
default: "no"
|
||||
choices: [ "yes", "no" ]
|
||||
|
@ -188,7 +199,8 @@ options:
|
|||
default: 300
|
||||
apply_immediately:
|
||||
description:
|
||||
- Used only when command=modify. If enabled, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window.
|
||||
- Used only when command=modify. If enabled, the modifications will be applied as soon as possible rather than waiting for the next
|
||||
preferred maintenance window.
|
||||
default: no
|
||||
choices: [ "yes", "no" ]
|
||||
force_failover:
|
||||
|
@ -445,7 +457,9 @@ class RDS2Connection:
|
|||
|
||||
def get_db_instance(self, instancename):
|
||||
try:
|
||||
dbinstances = self.connection.describe_db_instances(db_instance_identifier=instancename)['DescribeDBInstancesResponse']['DescribeDBInstancesResult']['DBInstances']
|
||||
dbinstances = self.connection.describe_db_instances(
|
||||
db_instance_identifier=instancename
|
||||
)['DescribeDBInstancesResponse']['DescribeDBInstancesResult']['DBInstances']
|
||||
result = RDS2DBInstance(dbinstances[0])
|
||||
return result
|
||||
except boto.rds2.exceptions.DBInstanceNotFound as e:
|
||||
|
@ -455,7 +469,10 @@ class RDS2Connection:
|
|||
|
||||
def get_db_snapshot(self, snapshotid):
|
||||
try:
|
||||
snapshots = self.connection.describe_db_snapshots(db_snapshot_identifier=snapshotid, snapshot_type='manual')['DescribeDBSnapshotsResponse']['DescribeDBSnapshotsResult']['DBSnapshots']
|
||||
snapshots = self.connection.describe_db_snapshots(
|
||||
db_snapshot_identifier=snapshotid,
|
||||
snapshot_type='manual'
|
||||
)['DescribeDBSnapshotsResponse']['DescribeDBSnapshotsResult']['DBSnapshots']
|
||||
result = RDS2Snapshot(snapshots[0])
|
||||
return result
|
||||
except boto.rds2.exceptions.DBSnapshotNotFound as e:
|
||||
|
@ -472,7 +489,11 @@ class RDS2Connection:
|
|||
|
||||
def create_db_instance_read_replica(self, instance_name, source_instance, **params):
|
||||
try:
|
||||
result = self.connection.create_db_instance_read_replica(instance_name, source_instance, **params)['CreateDBInstanceReadReplicaResponse']['CreateDBInstanceReadReplicaResult']['DBInstance']
|
||||
result = self.connection.create_db_instance_read_replica(
|
||||
instance_name,
|
||||
source_instance,
|
||||
**params
|
||||
)['CreateDBInstanceReadReplicaResponse']['CreateDBInstanceReadReplicaResult']['DBInstance']
|
||||
return RDS2DBInstance(result)
|
||||
except boto.exception.BotoServerError as e:
|
||||
raise RDSException(e)
|
||||
|
@ -507,7 +528,11 @@ class RDS2Connection:
|
|||
|
||||
def restore_db_instance_from_db_snapshot(self, instance_name, snapshot, instance_type, **params):
|
||||
try:
|
||||
result = self.connection.restore_db_instance_from_db_snapshot(instance_name, snapshot, **params)['RestoreDBInstanceFromDBSnapshotResponse']['RestoreDBInstanceFromDBSnapshotResult']['DBInstance']
|
||||
result = self.connection.restore_db_instance_from_db_snapshot(
|
||||
instance_name,
|
||||
snapshot,
|
||||
**params
|
||||
)['RestoreDBInstanceFromDBSnapshotResponse']['RestoreDBInstanceFromDBSnapshotResult']['DBInstance']
|
||||
return RDS2DBInstance(result)
|
||||
except boto.exception.BotoServerError as e:
|
||||
raise RDSException(e)
|
||||
|
@ -1046,7 +1071,8 @@ def main():
|
|||
command = dict(choices=['create', 'replicate', 'delete', 'facts', 'modify', 'promote', 'snapshot', 'reboot', 'restore'], required=True),
|
||||
instance_name = dict(required=False),
|
||||
source_instance = dict(required=False),
|
||||
db_engine = dict(choices=['mariadb', 'MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex', 'sqlserver-web', 'postgres', 'aurora'], required=False),
|
||||
db_engine = dict(choices=['mariadb', 'MySQL', 'oracle-se1', 'oracle-se', 'oracle-ee', 'sqlserver-ee', 'sqlserver-se', 'sqlserver-ex',
|
||||
'sqlserver-web', 'postgres', 'aurora'], required=False),
|
||||
size = dict(required=False),
|
||||
instance_type = dict(aliases=['type'], required=False),
|
||||
username = dict(required=False),
|
||||
|
|
|
@@ -52,7 +52,35 @@ options:
required: false
default: null
aliases: []
choices: [ 'aurora5.6', 'mariadb10.0', 'mariadb10.1', 'mysql5.1', 'mysql5.5', 'mysql5.6', 'mysql5.7', 'oracle-ee-11.2', 'oracle-ee-12.1', 'oracle-se-11.2', 'oracle-se-12.1', 'oracle-se1-11.2', 'oracle-se1-12.1', 'postgres9.3', 'postgres9.4', 'postgres9.5', 'postgres9.6', sqlserver-ee-10.5', 'sqlserver-ee-11.0', 'sqlserver-ex-10.5', 'sqlserver-ex-11.0', 'sqlserver-ex-12.0', 'sqlserver-se-10.5', 'sqlserver-se-11.0', 'sqlserver-se-12.0', 'sqlserver-web-10.5', 'sqlserver-web-11.0', 'sqlserver-web-12.0' ]
choices:
- 'aurora5.6'
- 'mariadb10.0'
- 'mariadb10.1'
- 'mysql5.1'
- 'mysql5.5'
- 'mysql5.6'
- 'mysql5.7'
- 'oracle-ee-11.2'
- 'oracle-ee-12.1'
- 'oracle-se-11.2'
- 'oracle-se-12.1'
- 'oracle-se1-11.2'
- 'oracle-se1-12.1'
- 'postgres9.3'
- 'postgres9.4'
- 'postgres9.5'
- 'postgres9.6'
- 'sqlserver-ee-10.5'
- 'sqlserver-ee-11.0'
- 'sqlserver-ex-10.5'
- 'sqlserver-ex-11.0'
- 'sqlserver-ex-12.0'
- 'sqlserver-se-10.5'
- 'sqlserver-se-11.0'
- 'sqlserver-se-12.0'
- 'sqlserver-web-10.5'
- 'sqlserver-web-11.0'
- 'sqlserver-web-12.0'
immediate:
description:
- Whether to apply the changes immediately, or after the next reboot of any associated instances.
@@ -61,7 +89,8 @@ options:
aliases: []
params:
description:
- Map of parameter names and values. Numeric values may be represented as K for kilo (1024), M for mega (1024^2), G for giga (1024^3), or T for tera (1024^4), and these values will be expanded into the appropriate number before being set in the parameter group.
- Map of parameter names and values. Numeric values may be represented as K for kilo (1024), M for mega (1024^2), G for giga (1024^3),
or T for tera (1024^4), and these values will be expanded into the appropriate number before being set in the parameter group.
required: false
default: null
aliases: []
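
The params description above says suffixed numeric values are expanded before being applied. A minimal sketch of that expansion, using the stated K/M/G/T factors (the helper name is ours, not the module's):

# Hypothetical helper illustrating the K/M/G/T expansion described above.
FACTORS = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}

def expand_value(value):
    """Expand '512M' -> 536870912; plain numbers pass through unchanged."""
    value = str(value)
    if value and value[-1].upper() in FACTORS:
        return int(value[:-1]) * FACTORS[value[-1].upper()]
    return int(value)

print(expand_value('512M'))   # 536870912
print(expand_value('16384'))  # 16384
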
@@ -144,7 +144,9 @@ def main():
|
|||
# Sort the subnet groups before we compare them
|
||||
matching_groups[0].subnet_ids.sort()
|
||||
group_subnets.sort()
|
||||
if ( (matching_groups[0].name != group_name) or (matching_groups[0].description != group_description) or (matching_groups[0].subnet_ids != group_subnets) ):
|
||||
if (matching_groups[0].name != group_name or
|
||||
matching_groups[0].description != group_description or
|
||||
matching_groups[0].subnet_ids != group_subnets):
|
||||
changed_group = conn.modify_db_subnet_group(group_name, description=group_description, subnet_ids=group_subnets)
|
||||
changed = True
|
||||
except BotoServerError as e:
|
||||
|
|
|
@ -129,7 +129,8 @@ options:
|
|||
default: null
|
||||
wait:
|
||||
description:
|
||||
- When command=create, modify or restore then wait for the database to enter the 'available' state. When command=delete wait for the database to be terminated.
|
||||
- When command=create, modify or restore then wait for the database to enter the 'available' state. When command=delete wait for the database to be
|
||||
terminated.
|
||||
default: "no"
|
||||
choices: [ "yes", "no" ]
|
||||
wait_timeout:
|
||||
|
@ -413,7 +414,8 @@ def main():
|
|||
argument_spec.update(dict(
|
||||
command = dict(choices=['create', 'facts', 'delete', 'modify'], required=True),
|
||||
identifier = dict(required=True),
|
||||
node_type = dict(choices=['ds1.xlarge', 'ds1.8xlarge', 'ds2.xlarge', 'ds2.8xlarge', 'dc1.large', 'dc1.8xlarge', 'dw1.xlarge', 'dw1.8xlarge', 'dw2.large', 'dw2.8xlarge'], required=False),
|
||||
node_type = dict(choices=['ds1.xlarge', 'ds1.8xlarge', 'ds2.xlarge', 'ds2.8xlarge', 'dc1.large', 'dc1.8xlarge',
|
||||
'dw1.xlarge', 'dw1.8xlarge', 'dw2.large', 'dw2.8xlarge'], required=False),
|
||||
username = dict(required=False),
|
||||
password = dict(no_log=True, required=False),
|
||||
db_name = dict(require=False),
|
||||
|
|
|
@ -77,7 +77,8 @@ options:
|
|||
default: false
|
||||
value:
|
||||
description:
|
||||
- The new value when creating a DNS record. Multiple comma-spaced values are allowed for non-alias records. When deleting a record all values for the record must be specified or Route53 will not delete it.
|
||||
- The new value when creating a DNS record. Multiple comma-spaced values are allowed for non-alias records. When deleting a record all values
|
||||
for the record must be specified or Route53 will not delete it.
|
||||
required: false
|
||||
default: null
|
||||
overwrite:
|
||||
|
@@ -87,12 +88,14 @@ options:
default: null
retry_interval:
description:
- In the case that route53 is still servicing a prior request, this module will wait and try again after this many seconds. If you have many domain names, the default of 500 seconds may be too long.
- In the case that route53 is still servicing a prior request, this module will wait and try again after this many seconds. If you have many
domain names, the default of 500 seconds may be too long.
required: false
default: 500
private_zone:
description:
- If set to true, the private zone matching the requested name within the domain will be used if there are both public and private zones. The default is to use the public zone.
- If set to true, the private zone matching the requested name within the domain will be used if there are both public and private zones.
The default is to use the public zone.
required: false
default: false
version_added: "1.9"
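
retry_interval, documented just above, only controls how long the module sleeps before retrying while Route 53 is still servicing a prior request. A minimal sketch of that pattern; the function and the broad exception handling are illustrative, not the module's actual code:

import time

def submit_with_retry(change_fn, retry_interval=500, max_attempts=3):
    """Call change_fn, sleeping retry_interval seconds between failed attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return change_fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(retry_interval)

# Example: submit_with_retry(lambda: 'ok', retry_interval=5) returns 'ok' on the first try.
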
@@ -24,7 +24,8 @@ DOCUMENTATION = '''
|
|||
module: s3
|
||||
short_description: manage objects in S3.
|
||||
description:
|
||||
- This module allows the user to manage S3 buckets and the objects within them. Includes support for creating and deleting both objects and buckets, retrieving objects as files or strings and generating download links. This module has a dependency on python-boto.
|
||||
- This module allows the user to manage S3 buckets and the objects within them. Includes support for creating and deleting both objects and buckets,
|
||||
retrieving objects as files or strings and generating download links. This module has a dependency on python-boto.
|
||||
version_added: "1.1"
|
||||
options:
|
||||
aws_access_key:
|
||||
|
@ -89,7 +90,8 @@ options:
|
|||
version_added: "1.6"
|
||||
mode:
|
||||
description:
|
||||
- Switches the module behaviour between put (upload), get (download), geturl (return download url, Ansible 1.3+), getstr (download object as string (1.3+)), list (list keys, Ansible 2.0+), create (bucket), delete (bucket), and delobj (delete object, Ansible 2.0+).
|
||||
- Switches the module behaviour between put (upload), get (download), geturl (return download url, Ansible 1.3+),
|
||||
getstr (download object as string (1.3+)), list (list keys, Ansible 2.0+), create (bucket), delete (bucket), and delobj (delete object, Ansible 2.0+).
|
||||
required: true
|
||||
choices: ['get', 'put', 'delete', 'create', 'geturl', 'getstr', 'delobj', 'list']
|
||||
object:
|
||||
|
@ -99,7 +101,9 @@ options:
|
|||
default: null
|
||||
permission:
|
||||
description:
|
||||
- This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are 'private', 'public-read', 'public-read-write', 'authenticated-read'. Multiple permissions can be specified as a list.
|
||||
- This option lets the user set the canned permissions on the object/bucket that are created.
|
||||
The permissions that can be set are 'private', 'public-read', 'public-read-write', 'authenticated-read'. Multiple permissions can be
|
||||
specified as a list.
|
||||
required: false
|
||||
default: private
|
||||
version_added: "2.0"
|
||||
|
@ -118,13 +122,17 @@ options:
|
|||
version_added: "2.0"
|
||||
overwrite:
|
||||
description:
|
||||
- Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. Boolean or one of [always, never, different], true is equal to 'always' and false is equal to 'never', new in 2.0
|
||||
- Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations.
|
||||
Boolean or one of [always, never, different], true is equal to 'always' and false is equal to 'never', new in 2.0
|
||||
required: false
|
||||
default: 'always'
|
||||
version_added: "1.2"
|
||||
region:
|
||||
description:
|
||||
- "AWS region to create the bucket in. If not set then the value of the AWS_REGION and EC2_REGION environment variables are checked, followed by the aws_region and ec2_region settings in the Boto config file. If none of those are set the region defaults to the S3 Location: US Standard. Prior to ansible 1.8 this parameter could be specified but had no effect."
|
||||
- >
|
||||
AWS region to create the bucket in. If not set then the value of the AWS_REGION and EC2_REGION environment variables are checked,
|
||||
followed by the aws_region and ec2_region settings in the Boto config file. If none of those are set the region defaults to the
|
||||
S3 Location: US Standard. Prior to ansible 1.8 this parameter could be specified but had no effect.
|
||||
required: false
|
||||
default: null
|
||||
version_added: "1.8"
|
||||
|
@ -153,7 +161,9 @@ options:
|
|||
version_added: "1.3"
|
||||
ignore_nonexistent_bucket:
|
||||
description:
|
||||
- "Overrides initial bucket lookups in case bucket or iam policies are restrictive. Example: a user may have the GetObject permission but no other permissions. In this case using the option mode: get will fail without specifying ignore_nonexistent_bucket: True."
|
||||
- >
|
||||
Overrides initial bucket lookups in case bucket or iam policies are restrictive. Example: a user may have the GetObject permission but no other
|
||||
permissions. In this case using the option mode: get will fail without specifying ignore_nonexistent_bucket: True.
|
||||
default: false
|
||||
aliases: []
|
||||
version_added: "2.3"
|
||||
|
|
|
@@ -38,7 +38,9 @@ options:
required: true
expiration_date:
description:
- "Indicates the lifetime of the objects that are subject to the rule by the date they will expire. The value must be ISO-8601 format, the time must be midnight and a GMT timezone must be specified."
- >
Indicates the lifetime of the objects that are subject to the rule by the date they will expire. The value must be ISO-8601 format, the time must
be midnight and a GMT timezone must be specified.
required: false
default: null
expiration_days:
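
A quick way to produce the midnight-GMT ISO-8601 string this option expects; the exact fractional-seconds formatting is an assumption of this sketch:

from datetime import datetime

# Midnight GMT on 31 Dec 2030, formatted as an ISO-8601 string (sketch).
expiration_date = datetime(2030, 12, 31).strftime('%Y-%m-%dT%H:%M:%S.000Z')
print(expiration_date)  # 2030-12-31T00:00:00.000Z
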
@@ -77,7 +79,10 @@ options:
|
|||
choices: [ 'glacier', 'standard_ia']
|
||||
transition_date:
|
||||
description:
|
||||
- "Indicates the lifetime of the objects that are subject to the rule by the date they will transition to a different storage class. The value must be ISO-8601 format, the time must be midnight and a GMT timezone must be specified. If transition_days is not specified, this parameter is required."
|
||||
- >
|
||||
Indicates the lifetime of the objects that are subject to the rule by the date they will transition to a different storage class.
|
||||
The value must be ISO-8601 format, the time must be midnight and a GMT timezone must be specified. If transition_days is not specified,
|
||||
this parameter is required."
|
||||
required: false
|
||||
default: null
|
||||
transition_days:
|
||||
|
@ -110,7 +115,8 @@ EXAMPLES = '''
|
|||
status: enabled
|
||||
state: present
|
||||
|
||||
# Configure a lifecycle rule to transition all items with a prefix of /logs/ to glacier on 31 Dec 2020 and then delete on 31 Dec 2030. Note that midnight GMT must be specified.
|
||||
# Configure a lifecycle rule to transition all items with a prefix of /logs/ to glacier on 31 Dec 2020 and then delete on 31 Dec 2030.
|
||||
# Note that midnight GMT must be specified.
|
||||
# Be sure to quote your date strings
|
||||
- s3_lifecycle:
|
||||
name: mybucket
|
||||
|
@ -295,7 +301,9 @@ def compare_rule(rule_a, rule_b):
|
|||
if rule2_expiration is None:
|
||||
rule2_expiration = Expiration()
|
||||
|
||||
if (rule1.__dict__ == rule2.__dict__) and (rule1_expiration.__dict__ == rule2_expiration.__dict__) and (rule1_transition.__dict__ == rule2_transition.__dict__):
|
||||
if (rule1.__dict__ == rule2.__dict__ and
|
||||
rule1_expiration.__dict__ == rule2_expiration.__dict__ and
|
||||
rule1_transition.__dict__ == rule2_transition.__dict__):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
|
|
@ -24,7 +24,8 @@ DOCUMENTATION = '''
|
|||
module: s3_sync
|
||||
short_description: Efficiently upload multiple files to S3
|
||||
description:
|
||||
- The S3 module is great, but it is very slow for a large volume of files- even a dozen will be noticeable. In addition to speed, it handles globbing, inclusions/exclusions, mime types, expiration mapping, recursion, and smart directory mapping.
|
||||
- The S3 module is great, but it is very slow for a large volume of files- even a dozen will be noticeable. In addition to speed, it handles globbing,
|
||||
inclusions/exclusions, mime types, expiration mapping, recursion, and smart directory mapping.
|
||||
version_added: "2.3"
|
||||
options:
|
||||
mode:
|
||||
|
@@ -63,7 +64,9 @@ options:
choices: [ '', private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control ]
mime_map:
description:
- 'Dict entry from extension to MIME type. This will override any default/sniffed MIME type. For example C({".txt": "application/text", ".yml": "appication/text"})'
- >
Dict entry from extension to MIME type. This will override any default/sniffed MIME type.
For example C({".txt": "application/text", ".yml": "appication/text"})
required: false
include:
description:
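
A mime_map, as described above, simply wins over whatever the sniffer guesses. A minimal sketch of that precedence using the standard mimetypes module; the helper is illustrative and the real module's internals may differ:

import mimetypes

def pick_mime(path, mime_map=None):
    """Prefer an explicit extension mapping, fall back to sniffing by filename."""
    mime_map = mime_map or {}
    for ext, mime in mime_map.items():
        if path.endswith(ext):
            return mime
    guessed, _ = mimetypes.guess_type(path)
    return guessed or 'application/octet-stream'

print(pick_mime('notes.yml', {'.yml': 'application/text'}))  # application/text
print(pick_mime('photo.jpg'))                                # image/jpeg
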
@@ -362,7 +365,10 @@ def head_s3(s3, bucket, s3keys):
|
|||
try:
|
||||
retentry['s3_head'] = s3.head_object(Bucket=bucket, Key=entry['s3_path'])
|
||||
except botocore.exceptions.ClientError as err:
|
||||
if hasattr(err, 'response') and 'ResponseMetadata' in err.response and 'HTTPStatusCode' in err.response['ResponseMetadata'] and str(err.response['ResponseMetadata']['HTTPStatusCode']) == '404':
|
||||
if (hasattr(err, 'response') and
|
||||
'ResponseMetadata' in err.response and
|
||||
'HTTPStatusCode' in err.response['ResponseMetadata'] and
|
||||
str(err.response['ResponseMetadata']['HTTPStatusCode']) == '404'):
|
||||
pass
|
||||
else:
|
||||
raise Exception(err)
|
||||
|
@ -444,7 +450,8 @@ def main():
|
|||
bucket = dict(required=True),
|
||||
key_prefix = dict(required=False, default=''),
|
||||
file_root = dict(required=True, type='path'),
|
||||
permission = dict(required=False, choices=['private', 'public-read', 'public-read-write', 'authenticated-read', 'aws-exec-read', 'bucket-owner-read', 'bucket-owner-full-control']),
|
||||
permission = dict(required=False, choices=['private', 'public-read', 'public-read-write', 'authenticated-read', 'aws-exec-read', 'bucket-owner-read',
|
||||
'bucket-owner-full-control']),
|
||||
retries = dict(required=False),
|
||||
mime_map = dict(required=False, type='dict'),
|
||||
exclude = dict(required=False, default=".*"),
|
||||
|
|
|
@ -44,7 +44,10 @@ options:
|
|||
default: null
|
||||
region:
|
||||
description:
|
||||
- "AWS region to create the bucket in. If not set then the value of the AWS_REGION and EC2_REGION environment variables are checked, followed by the aws_region and ec2_region settings in the Boto config file. If none of those are set the region defaults to the S3 Location: US Standard."
|
||||
- >
|
||||
AWS region to create the bucket in. If not set then the value of the AWS_REGION and EC2_REGION environment variables are checked,
|
||||
followed by the aws_region and ec2_region settings in the Boto config file. If none of those are set the region defaults to the
|
||||
S3 Location: US Standard.
|
||||
required: false
|
||||
default: null
|
||||
state:
|
||||
|
@ -55,7 +58,10 @@ options:
|
|||
choices: [ 'present', 'absent' ]
|
||||
suffix:
|
||||
description:
|
||||
- "Suffix that is appended to a request that is for a directory on the website endpoint (e.g. if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not include a slash character."
|
||||
- >
|
||||
Suffix that is appended to a request that is for a directory on the website endpoint (e.g. if the suffix is index.html and you make a request to
|
||||
samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not include a slash
|
||||
character.
|
||||
required: false
|
||||
default: index.html
|
||||
|
||||
|
@ -115,7 +121,8 @@ routing_rules:
|
|||
sample: ansible.com
|
||||
condition:
|
||||
key_prefix_equals:
|
||||
description: object key name prefix when the redirect is applied. For example, to redirect requests for ExamplePage.html, the key prefix will be ExamplePage.html
|
||||
description: object key name prefix when the redirect is applied. For example, to redirect requests for ExamplePage.html, the key prefix will be
|
||||
ExamplePage.html
|
||||
returned: when routing rule present
|
||||
type: string
|
||||
sample: docs/
|
||||
|
|
|
@ -30,7 +30,8 @@ author: Boris Ekelchik (@bekelchik)
|
|||
options:
|
||||
role_arn:
|
||||
description:
|
||||
- The Amazon Resource Name (ARN) of the role that the caller is assuming (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#Identifiers_ARNs)
|
||||
- The Amazon Resource Name (ARN) of the role that the caller is
|
||||
assuming (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html#Identifiers_ARNs)
|
||||
required: true
|
||||
role_session_name:
|
||||
description:
|
||||
|
@ -43,7 +44,8 @@ options:
|
|||
default: null
|
||||
duration_seconds:
|
||||
description:
|
||||
- The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds.
|
||||
- The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour).
|
||||
By default, the value is set to 3600 seconds.
|
||||
required: false
|
||||
default: null
|
||||
external_id:
|
||||
|
|
|
@ -30,7 +30,9 @@ author: Victor Costan (@pwnall)
|
|||
options:
|
||||
duration_seconds:
|
||||
description:
|
||||
- The duration, in seconds, of the session token. See http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html#API_GetSessionToken_RequestParameters for acceptable and default values.
|
||||
- The duration, in seconds, of the session token.
|
||||
See http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html#API_GetSessionToken_RequestParameters
|
||||
for acceptable and default values.
|
||||
required: false
|
||||
default: null
|
||||
mfa_serial_number:
|
||||
|
|
|
@ -53,12 +53,14 @@ options:
|
|||
required: true
|
||||
image:
|
||||
description:
|
||||
- system image for creating the virtual machine (e.g., b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu_DAILY_BUILD-precise-12_04_3-LTS-amd64-server-20131205-en-us-30GB)
|
||||
- system image for creating the virtual machine
|
||||
(e.g., b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu_DAILY_BUILD-precise-12_04_3-LTS-amd64-server-20131205-en-us-30GB)
|
||||
required: true
|
||||
default: null
|
||||
role_size:
|
||||
description:
|
||||
- azure role size for the new virtual machine (e.g., Small, ExtraLarge, A6). You have to pay attention to the fact that instances of type G and DS are not available in all regions (locations). Make sure if you selected the size and type of instance available in your chosen location.
|
||||
- azure role size for the new virtual machine (e.g., Small, ExtraLarge, A6). You have to pay attention to the fact that instances of
|
||||
type G and DS are not available in all regions (locations). Make sure if you selected the size and type of instance available in your chosen location.
|
||||
required: false
|
||||
default: Small
|
||||
endpoints:
|
||||
|
@ -78,7 +80,8 @@ options:
|
|||
default: null
|
||||
ssh_cert_path:
|
||||
description:
|
||||
- path to an X509 certificate containing the public ssh key to install in the virtual machine. See http://www.windowsazure.com/en-us/manage/linux/tutorials/intro-to-linux/ for more details.
|
||||
- path to an X509 certificate containing the public ssh key to install in the virtual machine.
|
||||
See http://www.windowsazure.com/en-us/manage/linux/tutorials/intro-to-linux/ for more details.
|
||||
- if this option is specified, password-based ssh authentication will be disabled.
|
||||
required: false
|
||||
default: null
|
||||
|
|
|
@ -233,7 +233,9 @@ EXAMPLES = '''
|
|||
- "14.04.2-LTS"
|
||||
- "15.04"
|
||||
metadata:
|
||||
description: "The Ubuntu version for the VM. This will pick a fully patched image of this given Ubuntu version. Allowed values: 12.04.5-LTS, 14.04.2-LTS, 15.04."
|
||||
description: >
|
||||
The Ubuntu version for the VM. This will pick a fully patched image of this given Ubuntu version.
|
||||
Allowed values: 12.04.5-LTS, 14.04.2-LTS, 15.04."
|
||||
variables:
|
||||
location: "West US"
|
||||
imagePublisher: "Canonical"
|
||||
|
@ -320,7 +322,9 @@ EXAMPLES = '''
|
|||
osDisk:
|
||||
name: "osdisk"
|
||||
vhd:
|
||||
uri: "[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',variables('OSDiskName'),'.vhd')]"
|
||||
uri: >
|
||||
[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',
|
||||
variables('OSDiskName'),'.vhd')]
|
||||
caching: "ReadWrite"
|
||||
createOption: "FromImage"
|
||||
networkProfile:
|
||||
|
|
|
@ -120,7 +120,7 @@ azure_networkinterfaces:
|
|||
"tags": {},
|
||||
"type": "Microsoft.Network/networkInterfaces"
|
||||
}]
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
from ansible.module_utils.basic import *
|
||||
from ansible.module_utils.azure_rm_common import *
|
||||
|
|
|
@ -334,7 +334,7 @@ state:
|
|||
},
|
||||
"type": "Microsoft.Network/networkSecurityGroups"
|
||||
}
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
from ansible.module_utils.basic import *
|
||||
from ansible.module_utils.azure_rm_common import *
|
||||
|
|
|
@ -200,7 +200,7 @@ azure_securitygroups:
|
|||
"type": "Microsoft.Network/networkSecurityGroups"
|
||||
}]
|
||||
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
|
||||
from ansible.module_utils.basic import *
|
||||
|
|
|
@ -130,7 +130,7 @@ state:
|
|||
description: Success or failure of the provisioning event.
|
||||
type: str
|
||||
example: "Succeeded"
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
|
||||
from ansible.module_utils.basic import *
|
||||
|
|
|
@ -52,7 +52,8 @@ options:
|
|||
description:
|
||||
- Assert the state of the virtual machine.
|
||||
- State 'present' will check that the machine exists with the requested configuration. If the configuration
|
||||
of the existing machine does not match, the machine will be updated. Use options started, allocated and restarted to change the machine's power state.
|
||||
of the existing machine does not match, the machine will be updated. Use options started, allocated and restarted to change the machine's power
|
||||
state.
|
||||
- State 'absent' will remove the virtual machine.
|
||||
default: present
|
||||
required: false
|
||||
|
@ -437,7 +438,7 @@ azure_vm:
|
|||
},
|
||||
"type": "Microsoft.Compute/virtualMachines"
|
||||
}
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
import random
|
||||
|
||||
|
|
|
@@ -31,12 +31,15 @@ description:
- Create, start, stop and delete servers on the cloudscale.ch IaaS service.
- All operations are performed using the cloudscale.ch public API v1.
- "For details consult the full API documentation: U(https://www.cloudscale.ch/en/api/v1)."
- An valid API token is required for all operations. You can create as many tokens as you like using the cloudscale.ch control panel at U(https://control.cloudscale.ch).
- An valid API token is required for all operations. You can create as many tokens as you like using the cloudscale.ch control panel at
U(https://control.cloudscale.ch).
notes:
- Instead of the api_token parameter the CLOUDSCALE_API_TOKEN environment variable can be used.
- To create a new server at least the C(name), C(ssh_key), C(image) and C(flavor) options are required.
- If more than one server with the name given by the C(name) option exists, execution is aborted.
- Once a server is created all parameters except C(state) are read-only. You can't change the name, flavor or any other property. This is a limitation of the cloudscale.ch API. The module will silently ignore differences between the configured parameters and the running server if a server with the correct name or UUID exists. Only state changes will be applied.
- Once a server is created all parameters except C(state) are read-only. You can't change the name, flavor or any other property. This is a limitation
of the cloudscale.ch API. The module will silently ignore differences between the configured parameters and the running server if a server with the
correct name or UUID exists. Only state changes will be applied.
version_added: 2.3
author: "Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>"
options:
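
The notes above say the api_token parameter can be replaced by the CLOUDSCALE_API_TOKEN environment variable. A tiny sketch of that fallback order; the helper name is ours, the environment variable name comes from the notes:

import os

def resolve_api_token(api_token=None):
    """Prefer an explicit api_token, otherwise read CLOUDSCALE_API_TOKEN from the environment."""
    return api_token or os.environ.get('CLOUDSCALE_API_TOKEN')

# resolve_api_token('abc123') -> 'abc123'; resolve_api_token() -> value of CLOUDSCALE_API_TOKEN, if set.
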
@@ -307,7 +307,8 @@ state:
|
|||
type: string
|
||||
sample: Up
|
||||
suitable_for_migration:
|
||||
description: Whether this host is suitable (has enough capacity and satisfies all conditions like hosttags, max guests VM limit, etc) to migrate a VM to it or not.
|
||||
description: Whether this host is suitable (has enough capacity and satisfies all conditions like hosttags, max guests VM limit, etc) to migrate a VM
|
||||
to it or not.
|
||||
returned: success
|
||||
type: string
|
||||
sample: true
|
||||
|
|
|
@ -150,7 +150,8 @@ options:
|
|||
default: null
|
||||
root_disk_size:
|
||||
description:
|
||||
- Root disk size in GByte required if deploying instance with KVM hypervisor and want resize the root disk size at startup (need CloudStack >= 4.4, cloud-initramfs-growroot installed and enabled in the template)
|
||||
- Root disk size in GByte required if deploying instance with KVM hypervisor and want resize the root disk size at startup
|
||||
(need CloudStack >= 4.4, cloud-initramfs-growroot installed and enabled in the template)
|
||||
required: false
|
||||
default: null
|
||||
security_groups:
|
||||
|
@ -984,7 +985,8 @@ def main():
|
|||
memory = dict(default=None, type='int'),
|
||||
template = dict(default=None),
|
||||
iso = dict(default=None),
|
||||
template_filter = dict(default="executable", aliases=['iso_filter'], choices=['featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable', 'community']),
|
||||
template_filter = dict(default="executable", aliases=['iso_filter'], choices=['featured', 'self', 'selfexecutable', 'sharedexecutable', 'executable',
|
||||
'community']),
|
||||
networks = dict(type='list', aliases=[ 'network' ], default=None),
|
||||
ip_to_networks = dict(type='list', aliases=['ip_to_network'], default=None),
|
||||
ip_address = dict(defaul=None),
|
||||
|
|
|
@ -48,7 +48,8 @@ options:
|
|||
default: null
|
||||
is_ready:
|
||||
description:
|
||||
- This flag is used for searching existing ISOs. If set to C(true), it will only list ISO ready for deployment e.g. successfully downloaded and installed. Recommended to set it to C(false).
|
||||
- This flag is used for searching existing ISOs. If set to C(true), it will only list ISO ready for deployment e.g.
|
||||
successfully downloaded and installed. Recommended to set it to C(false).
|
||||
required: false
|
||||
default: false
|
||||
aliases: []
|
||||
|
|
|
@ -51,7 +51,8 @@ options:
|
|||
- String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key.
|
||||
unique_name:
|
||||
description:
|
||||
- Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence.
|
||||
- Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host
|
||||
per name. Useful for idempotence.
|
||||
version_added: "1.4"
|
||||
default: "no"
|
||||
choices: [ "yes", "no" ]
|
||||
|
@ -269,7 +270,8 @@ class Droplet(JsonfyMixIn):
|
|||
cls.manager = DoManager(None, api_token, api_version=2)
|
||||
|
||||
@classmethod
|
||||
def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False, user_data=None, ipv6=False):
|
||||
def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False, user_data=None,
|
||||
ipv6=False):
|
||||
private_networking_lower = str(private_networking).lower()
|
||||
backups_enabled_lower = str(backups_enabled).lower()
|
||||
ipv6_lower = str(ipv6).lower()
|
||||
|
@ -463,7 +465,8 @@ def main():
|
|||
),
|
||||
)
|
||||
if not HAS_DOPY and not HAS_SIX:
|
||||
module.fail_json(msg='dopy >= 0.3.2 is required for this module. dopy requires six but six is not installed. Make sure both dopy and six are installed.')
|
||||
module.fail_json(msg='dopy >= 0.3.2 is required for this module. dopy requires six but six is not installed. '
|
||||
'Make sure both dopy and six are installed.')
|
||||
if not HAS_DOPY:
|
||||
module.fail_json(msg='dopy >= 0.3.2 required for this module')
|
||||
|
||||
|
|
|
@ -25,7 +25,10 @@ module: gc_storage
|
|||
version_added: "1.4"
|
||||
short_description: This module manages objects/buckets in Google Cloud Storage.
|
||||
description:
|
||||
- This module allows users to manage their objects/buckets in Google Cloud Storage. It allows upload and download operations and can set some canned permissions. It also allows retrieval of URLs for objects for use in playbooks, and retrieval of string contents of objects. This module requires setting the default project in GCS prior to playbook usage. See U(https://developers.google.com/storage/docs/reference/v1/apiversion1) for information about setting the default project.
|
||||
- This module allows users to manage their objects/buckets in Google Cloud Storage. It allows upload and download operations and can set some
|
||||
canned permissions. It also allows retrieval of URLs for objects for use in playbooks, and retrieval of string contents of objects. This module
|
||||
requires setting the default project in GCS prior to playbook usage. See U(https://developers.google.com/storage/docs/reference/v1/apiversion1) for
|
||||
information about setting the default project.
|
||||
|
||||
options:
|
||||
bucket:
|
||||
|
@ -54,7 +57,8 @@ options:
|
|||
aliases: [ 'overwrite' ]
|
||||
permission:
|
||||
description:
|
||||
- This option let's the user set the canned permissions on the object/bucket that are created. The permissions that can be set are 'private', 'public-read', 'authenticated-read'.
|
||||
- This option let's the user set the canned permissions on the object/bucket that are created. The permissions that can be set are 'private',
|
||||
'public-read', 'authenticated-read'.
|
||||
required: false
|
||||
default: private
|
||||
headers:
|
||||
|
@ -65,12 +69,14 @@ options:
|
|||
default: '{}'
|
||||
expiration:
|
||||
description:
|
||||
- Time limit (in seconds) for the URL generated and returned by GCA when performing a mode=put or mode=get_url operation. This url is only available when public-read is the acl for the object.
|
||||
- Time limit (in seconds) for the URL generated and returned by GCA when performing a mode=put or mode=get_url operation. This url is only
|
||||
available when public-read is the acl for the object.
|
||||
required: false
|
||||
default: null
|
||||
mode:
|
||||
description:
|
||||
- Switches the module behaviour between upload, download, get_url (return download url) , get_str (download object as string), create (bucket) and delete (bucket).
|
||||
- Switches the module behaviour between upload, download, get_url (return download url) , get_str (download object as string), create (bucket) and
|
||||
delete (bucket).
|
||||
required: true
|
||||
default: null
|
||||
choices: [ 'get', 'put', 'get_url', 'get_str', 'delete', 'create' ]
|
||||
|
|
|
@ -36,7 +36,8 @@ options:
|
|||
aliases: []
|
||||
instance_pattern:
|
||||
description:
|
||||
- The pattern of GCE instance names to match for adding/removing tags. Full-Python regex is supported. See U(https://docs.python.org/2/library/re.html) for details.
|
||||
- The pattern of GCE instance names to match for adding/removing tags. Full-Python regex is supported.
|
||||
See U(https://docs.python.org/2/library/re.html) for details.
|
||||
If instance_name is not specified, this field is required.
|
||||
required: false
|
||||
default: null
|
||||
|
|
|
@ -44,7 +44,9 @@ options:
|
|||
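
Because instance_pattern takes a full Python regex, selecting instances is just a re filter; a small sketch with made-up instance names:

import re

# Illustrative only: the names are invented, the gce_tag module applies the same idea to real instances.
instances = ['web-001', 'web-002', 'db-001']
pattern = re.compile(r'^web-\d+$')
matched = [name for name in instances if pattern.match(name)]
print(matched)  # ['web-001', 'web-002']
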
required: True
|
||||
subscription:
|
||||
description:
|
||||
- Dictionary containing a subscripton name associated with a topic (required), along with optional ack_deadline, push_endpoint and pull. For pulling from a subscription, message_ack (bool), max_messages (int) and return_immediate are available as subfields. See subfields name, push_endpoint and ack_deadline for more information.
|
||||
- Dictionary containing a subscripton name associated with a topic (required), along with optional ack_deadline, push_endpoint and pull.
|
||||
For pulling from a subscription, message_ack (bool), max_messages (int) and return_immediate are available as subfields.
|
||||
See subfields name, push_endpoint and ack_deadline for more information.
|
||||
required: False
|
||||
name:
|
||||
description: Subfield of subscription. Required if subscription is specified. See examples.
|
||||
|
@ -53,15 +55,25 @@ options:
|
|||
description: Subfield of subscription. Not required. Default deadline for subscriptions to ACK the message before it is resent. See examples.
|
||||
required: False
|
||||
pull:
|
||||
description: Subfield of subscription. Not required. If specified, messages will be retrieved from topic via the provided subscription name. max_messages (int; default None; max number of messages to pull), message_ack (bool; default False; acknowledge the message) and return_immediately (bool; default True, don't wait for messages to appear). If the messages are acknowledged, changed is set to True, otherwise, changed is False.
|
||||
description:
|
||||
- Subfield of subscription. Not required. If specified, messages will be retrieved from topic via the provided subscription name.
|
||||
max_messages (int; default None; max number of messages to pull), message_ack (bool; default False; acknowledge the message) and return_immediately
|
||||
(bool; default True, don't wait for messages to appear). If the messages are acknowledged, changed is set to True, otherwise, changed is False.
|
||||
push_endpoint:
|
||||
description: Subfield of subscription. Not required. If specified, message will be sent to an endpoint. See U(https://cloud.google.com/pubsub/docs/advanced#push_endpoints) for more information.
|
||||
description:
|
||||
- Subfield of subscription. Not required. If specified, message will be sent to an endpoint.
|
||||
See U(https://cloud.google.com/pubsub/docs/advanced#push_endpoints) for more information.
|
||||
required: False
|
||||
publish:
|
||||
description: List of dictionaries describing messages and attributes to be published. Dictionary is in message(str):attributes(dict) format. Only message is required.
|
||||
description:
|
||||
- List of dictionaries describing messages and attributes to be published. Dictionary is in message(str):attributes(dict) format.
|
||||
Only message is required.
|
||||
required: False
|
||||
state:
description: State of the topic or queue (absent, present). Applies to the most granular resource. Remove the most granular resource. If subcription is specified we remove it. If only topic is specified, that is what is removed. Note that a topic can be removed without first removing the subscription.
description:
- State of the topic or queue (absent, present). Applies to the most granular resource. Remove the most granular resource. If subcription is
specified we remove it. If only topic is specified, that is what is removed. Note that a topic can be removed without first removing the
subscription.
required: False
default: "present"
'''
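
Pulling together the subscription subfields documented in this section (name, ack_deadline, push_endpoint, and the pull block with max_messages, message_ack and return_immediately), a plausible argument structure could look like the sketch below; the concrete values are invented:

# Illustrative structure only; field names follow the documentation above, values are made up.
subscription = {
    'name': 'ansible-sub',
    'ack_deadline': 60,
    'push_endpoint': 'https://example.com/pubsub-hook',
    'pull': {
        'max_messages': 10,
        'message_ack': True,
        'return_immediately': True,
    },
}
publish = [{'message': 'my message', 'attributes': {'key1': 'value1'}}]
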
@@ -144,7 +156,8 @@ gcpubsub:
|
|||
|
||||
RETURN = '''
|
||||
publish:
|
||||
description: List of dictionaries describing messages and attributes to be published. Dictionary is in message(str):attributes(dict) format. Only message is required.
|
||||
description: List of dictionaries describing messages and attributes to be published. Dictionary is in message(str):attributes(dict) format.
|
||||
Only message is required.
|
||||
returned: Only when specified
|
||||
type: list of dictionary
|
||||
sample: "publish: ['message': 'my message', attributes: {'key1': 'value1'}]"
|
||||
|
|
|
@ -292,7 +292,9 @@ def conn(url, user, password):
|
|||
def create_vm(conn, vmtype, vmname, zone, vmdisk_size, vmcpus, vmnic, vmnetwork, vmmem, vmdisk_alloc, sdomain, vmcores, vmos, vmdisk_int):
|
||||
if vmdisk_alloc == 'thin':
|
||||
# define VM params
|
||||
vmparams = params.VM(name=vmname,cluster=conn.clusters.get(name=zone),os=params.OperatingSystem(type_=vmos),template=conn.templates.get(name="Blank"),memory=1024 * 1024 * int(vmmem),cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))), type_=vmtype)
|
||||
vmparams = params.VM(name=vmname, cluster=conn.clusters.get(name=zone), os=params.OperatingSystem(type_=vmos),
|
||||
template=conn.templates.get(name="Blank"), memory=1024 * 1024 * int(vmmem),
|
||||
cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))), type_=vmtype)
|
||||
# define disk params
|
||||
vmdisk= params.Disk(size=1024 * 1024 * 1024 * int(vmdisk_size), wipe_after_delete=True, sparse=True, interface=vmdisk_int, type_="System", format='cow',
|
||||
storage_domains=params.StorageDomains(storage_domain=[conn.storagedomains.get(name=sdomain)]))
|
||||
|
@ -301,10 +303,12 @@ def create_vm(conn, vmtype, vmname, zone, vmdisk_size, vmcpus, vmnic, vmnetwork,
|
|||
nic_net1 = params.NIC(name='nic1', network=network_net, interface='virtio')
|
||||
elif vmdisk_alloc == 'preallocated':
|
||||
# define VM params
|
||||
vmparams = params.VM(name=vmname,cluster=conn.clusters.get(name=zone),os=params.OperatingSystem(type_=vmos),template=conn.templates.get(name="Blank"),memory=1024 * 1024 * int(vmmem),cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))) ,type_=vmtype)
|
||||
vmparams = params.VM(name=vmname, cluster=conn.clusters.get(name=zone), os=params.OperatingSystem(type_=vmos),
|
||||
template=conn.templates.get(name="Blank"), memory=1024 * 1024 * int(vmmem),
|
||||
cpu=params.CPU(topology=params.CpuTopology(cores=int(vmcores))) ,type_=vmtype)
|
||||
# define disk params
|
||||
vmdisk= params.Disk(size=1024 * 1024 * 1024 * int(vmdisk_size), wipe_after_delete=True, sparse=False, interface=vmdisk_int, type_="System", format='raw',
|
||||
storage_domains=params.StorageDomains(storage_domain=[conn.storagedomains.get(name=sdomain)]))
|
||||
vmdisk= params.Disk(size=1024 * 1024 * 1024 * int(vmdisk_size), wipe_after_delete=True, sparse=False, interface=vmdisk_int, type_="System",
|
||||
format='raw', storage_domains=params.StorageDomains(storage_domain=[conn.storagedomains.get(name=sdomain)]))
|
||||
# define network parameters
|
||||
network_net = params.Network(name=vmnetwork)
|
||||
nic_net1 = params.NIC(name=vmnic, network=network_net, interface='virtio')
|
||||
|
|
|
@@ -762,7 +762,10 @@ def get_vminfo(module, proxmox, node, vmid, **kwargs):
k = vm[k]
k = re.search('=(.*?),', k).group(1)
mac[interface] = k
if re.match(r'virtio[0-9]', k) is not None or re.match(r'ide[0-9]', k) is not None or re.match(r'scsi[0-9]', k) is not None or re.match(r'sata[0-9]', k) is not None:
if (re.match(r'virtio[0-9]', k) is not None or
re.match(r'ide[0-9]', k) is not None or
re.match(r'scsi[0-9]', k) is not None or
re.match(r'sata[0-9]', k) is not None):
device = k
k = vm[k]
k = re.search('(.*?),', k).group(1)
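
The wrapped condition above only changes layout, not behaviour: it still matches Proxmox config keys named after disk buses. A tiny standalone check with invented keys shows which names pass:

import re

# Invented sample keys; the real values come from the Proxmox VM config.
for k in ['virtio0', 'ide2', 'scsi1', 'sata0', 'net0', 'bootdisk']:
    matched = any(re.match(pattern, k) for pattern in (r'virtio[0-9]', r'ide[0-9]', r'scsi[0-9]', r'sata[0-9]'))
    print(k, matched)
# virtio0, ide2, scsi1 and sata0 match; net0 and bootdisk do not.
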
@@ -54,7 +54,8 @@ options:
|
|||
default: us-east-1
|
||||
deploy:
|
||||
description:
|
||||
- Whether or not to deploy artifacts after building them. When this option is `false` all the functions will be built, but no stack update will be run to send them out. This is mostly useful for generating artifacts to be stored/deployed elsewhere.
|
||||
- Whether or not to deploy artifacts after building them. When this option is `false` all the functions will be built, but no stack update will be
|
||||
run to send them out. This is mostly useful for generating artifacts to be stored/deployed elsewhere.
|
||||
required: false
|
||||
default: true
|
||||
notes:
|
||||
|
|
|
@ -80,7 +80,8 @@ options:
|
|||
version_added: "1.8"
|
||||
image_exclude:
|
||||
description:
|
||||
- Text to use to filter image names, for the case, such as HP, where there are multiple image names matching the common identifying portions. image_exclude is a negative match filter - it is text that may not exist in the image name. Defaults to "(deprecated)"
|
||||
- Text to use to filter image names, for the case, such as HP, where there are multiple image names matching the common identifying
|
||||
portions. image_exclude is a negative match filter - it is text that may not exist in the image name. Defaults to "(deprecated)"
|
||||
version_added: "1.8"
|
||||
flavor_id:
|
||||
description:
|
||||
|
@ -95,7 +96,8 @@ options:
|
|||
version_added: "1.8"
|
||||
flavor_include:
|
||||
description:
|
||||
- Text to use to filter flavor names, for the case, such as Rackspace, where there are multiple flavors that have the same ram count. flavor_include is a positive match filter - it must exist in the flavor name.
|
||||
- Text to use to filter flavor names, for the case, such as Rackspace, where there are multiple flavors that have the same ram count.
|
||||
flavor_include is a positive match filter - it must exist in the flavor name.
|
||||
version_added: "1.8"
|
||||
key_name:
|
||||
description:
|
||||
|
@ -459,7 +461,7 @@ def _create_server(module, nova):
|
|||
public = openstack_find_nova_addresses(getattr(server, 'addresses'), 'floating', 'public')
|
||||
|
||||
# now exit with info
|
||||
module.exit_json(changed = True, id = server.id, private_ip=''.join(private), public_ip=''.join(public), status = server.status, info = server._info)
|
||||
module.exit_json(changed=True, id=server.id, private_ip=''.join(private), public_ip=''.join(public), status=server.status, info=server._info)
|
||||
|
||||
if server.status == 'ERROR':
|
||||
module.fail_json(msg = "Error in creating the server, please check logs")
|
||||
|
|
|
@ -30,7 +30,8 @@ module: packet_device
|
|||
short_description: create, destroy, start, stop, and reboot a Packet Host machine.
|
||||
|
||||
description:
|
||||
- create, destroy, update, start, stop, and reboot a Packet Host machine. When the machine is created it can optionally wait for it to have an IP address before returning. This module has a dependency on packet >= 1.0.
|
||||
- create, destroy, update, start, stop, and reboot a Packet Host machine. When the machine is created it can optionally wait for it to have an
|
||||
IP address before returning. This module has a dependency on packet >= 1.0.
|
||||
- API is documented at U(https://www.packet.net/help/api/#page:devices,header:devices-devices-post).
|
||||
|
||||
version_added: 2.3
|
||||
|
@ -157,7 +158,7 @@ EXAMPLES = '''
|
|||
user_data: |
|
||||
#cloud-config
|
||||
ssh_authorized_keys:
|
||||
- ssh-dss AAAAB3NzaC1kc3MAAACBAIfNT5S0ncP4BBJBYNhNPxFF9lqVhfPeu6SM1LoCocxqDc1AT3zFRi8hjIf6TLZ2AA4FYbcAWxLMhiBxZRVldT9GdBXile78kAK5z3bKTwq152DCqpxwwbaTIggLFhsU8wrfBsPWnDuAxZ0h7mmrCjoLIE3CNLDA/NmV3iB8xMThAAAAFQCStcesSgR1adPORzBxTr7hug92LwAAAIBOProm3Gk+HWedLyE8IfofLaOeRnbBRHAOL4z0SexKkVOnQ/LGN/uDIIPGGBDYTvXgKZT+jbHeulRJ2jKgfSpGKN4JxFQ8uzVH492jEiiUJtT72Ss1dCV4PmyERVIw+f54itihV3z/t25dWgowhb0int8iC/OY3cGodlmYb3wdcQAAAIBuLbB45djZXzUkOTzzcRDIRfhaxo5WipbtEM2B1fuBt2gyrvksPpH/LK6xTjdIIb0CxPu4OCxwJG0aOz5kJoRnOWIXQGhH7VowrJhsqhIc8gN9ErbO5ea8b1L76MNcAotmBDeTUiPw01IJ8MdDxfmcsCslJKgoRKSmQpCwXQtN2g== tomk@hp2
|
||||
- {{ lookup('file', 'my_packet_sshkey') }}
|
||||
coreos:
|
||||
etcd:
|
||||
discovery: https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
|
||||
|
@ -207,7 +208,7 @@ devices:
|
|||
type: array
|
||||
sample: '[{"hostname": "my-server.com", "id": "server-id", "public-ipv4": "147.229.15.12", "private-ipv4": "10.0.15.12", "public-ipv6": ""2604:1380:2:5200::3"}]'
|
||||
returned: always
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
|
||||
import os
|
||||
|
|
|
@ -69,7 +69,7 @@ EXAMPLES = '''
|
|||
hosts: localhost
|
||||
tasks:
|
||||
packet_sshkey:
|
||||
key: ssh-dss AAAAB3NzaC1kc3MAAACBAIfNT5S0ncP4BBJBYNhNPxFF9lqVhfPeu6SM1LoCocxqDc1AT3zFRi8hjIf6TLZ2AA4FYbcAWxLMhiBxZRVldT9GdBXile78kAK5z3bKTwq152DCqpxwwbaTIggLFhsU8wrfBsPWnDuAxZ0h7mmrCjoLIE3CNLDA/NmV3iB8xMThAAAAFQCStcesSgR1adPORzBxTr7hug92LwAAAIBOProm3Gk+HWedLyE8IfofLaOeRnbBRHAOL4z0SexKkVOnQ/LGN/uDIIPGGBDYTvXgKZT+jbHeulRJ2jKgfSpGKN4JxFQ8uzVH492jEiiUJtT72Ss1dCV4PmyERVIw+f54itihV3z/t25dWgowhb0int8iC/OY3cGodlmYb3wdcQAAAIBuLbB45djZXzUkOTzzcRDIRfhaxo5WipbtEM2B1fuBt2gyrvksPpH/LK6xTjdIIb0CxPu4OCxwJG0aOz5kJoRnOWIXQGhH7VowrJhsqhIc8gN9ErbO5ea8b1L76MNcAotmBDeTUiPw01IJ8MdDxfmcsCslJKgoRKSmQpCwXQtN2g== tomk@hp2
|
||||
key: "{{ lookup('file', 'my_packet_sshkey.pub') }}"
|
||||
|
||||
- name: create sshkey from file
|
||||
hosts: localhost
|
||||
|
@ -104,7 +104,7 @@ sshkeys:
|
|||
}
|
||||
]
|
||||
returned: always
|
||||
'''
|
||||
''' # NOQA
|
||||
|
||||
import os
|
||||
import uuid
|
||||
|
|
|
@ -24,7 +24,8 @@ DOCUMENTATION = '''
|
|||
module: profitbricks
|
||||
short_description: Create, destroy, start, stop, and reboot a ProfitBricks virtual machine.
|
||||
description:
|
||||
- Create, destroy, update, start, stop, and reboot a ProfitBricks virtual machine. When the virtual machine is created it can optionally wait for it to be 'running' before returning. This module has a dependency on profitbricks >= 1.0.0
|
||||
- Create, destroy, update, start, stop, and reboot a ProfitBricks virtual machine. When the virtual machine is created it can optionally wait
|
||||
for it to be 'running' before returning. This module has a dependency on profitbricks >= 1.0.0
|
||||
version_added: "2.0"
|
||||
options:
|
||||
auto_increment:
|
||||
|
|
|
@ -24,7 +24,8 @@ DOCUMENTATION = '''
|
|||
module: profitbricks_datacenter
|
||||
short_description: Create or destroy a ProfitBricks Virtual Datacenter.
|
||||
description:
|
||||
- This is a simple module that supports creating or removing vDCs. A vDC is required before you can create servers. This module has a dependency on profitbricks >= 1.0.0
|
||||
- This is a simple module that supports creating or removing vDCs. A vDC is required before you can create servers. This module has a dependency
|
||||
on profitbricks >= 1.0.0
|
||||
version_added: "2.0"
|
||||
options:
|
||||
name:
|
||||
|
|
|
@ -240,13 +240,14 @@ import time
|
|||
|
||||
#TODO: get this info from API
|
||||
STATES = ['present', 'absent']
|
||||
DATACENTERS = ['ams01','ams03','che01','dal01','dal05','dal06','dal09','dal10','fra02','hkg02','hou02','lon02','mel01','mex01','mil01','mon01','osl01','par01','sjc01','sjc03','sao01','sea01','sng01','syd01','tok02','tor01','wdc01','wdc04']
|
||||
CPU_SIZES = [1,2,4,8,16,32,56]
|
||||
MEMORY_SIZES = [1024,2048,4096,6144,8192,12288,16384,32768,49152,65536,131072,247808]
|
||||
INITIALDISK_SIZES = [25,100]
|
||||
LOCALDISK_SIZES = [25,100,150,200,300]
|
||||
SANDISK_SIZES = [10,20,25,30,40,50,75,100,125,150,175,200,250,300,350,400,500,750,1000,1500,2000]
|
||||
NIC_SPEEDS = [10,100,1000]
|
||||
DATACENTERS = ['ams01', 'ams03', 'che01', 'dal01', 'dal05', 'dal06', 'dal09', 'dal10', 'fra02', 'hkg02', 'hou02', 'lon02', 'mel01', 'mex01', 'mil01', 'mon01',
|
||||
'osl01', 'par01', 'sjc01', 'sjc03', 'sao01', 'sea01', 'sng01', 'syd01', 'tok02', 'tor01', 'wdc01', 'wdc04']
|
||||
CPU_SIZES = [1, 2, 4, 8, 16, 32, 56]
|
||||
MEMORY_SIZES = [1024, 2048, 4096, 6144, 8192, 12288, 16384, 32768, 49152, 65536, 131072, 247808]
|
||||
INITIALDISK_SIZES = [25, 100]
|
||||
LOCALDISK_SIZES = [25, 100, 150, 200, 300]
|
||||
SANDISK_SIZES = [10, 20, 25, 30, 40, 50, 75, 100, 125, 150, 175, 200, 250, 300, 350, 400, 500, 750, 1000, 1500, 2000]
|
||||
NIC_SPEEDS = [10, 100, 1000]
|
||||
|
||||
try:
|
||||
import SoftLayer
|
||||
|
|
|
@ -82,7 +82,9 @@ options:
|
|||
description:
|
||||
- "Set the guest ID (Debian, RHEL, Windows...)"
|
||||
- "This field is required when creating a VM"
|
||||
- "Valid values are referenced here: https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html"
|
||||
- >
|
||||
Valid values are referenced here:
|
||||
https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
|
||||
version_added: "2.3"
|
||||
disk:
|
||||
description:
|
||||
|
@ -675,7 +677,9 @@ class PyVmomiHelper(object):
|
|||
# VDS switch
|
||||
pg_obj = get_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_devices[key]['name'])
|
||||
|
||||
if nic.device.backing and ( nic.device.backing.port.portgroupKey != pg_obj.key or nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid ):
|
||||
if (nic.device.backing and
|
||||
(nic.device.backing.port.portgroupKey != pg_obj.key or
|
||||
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid)):
|
||||
nic_change_detected = True
|
||||
|
||||
dvs_port_connection = vim.dvs.PortConnection()
|
||||
|
@ -792,7 +796,8 @@ class PyVmomiHelper(object):
|
|||
|
||||
if 'joindomain' in self.params['customization']:
|
||||
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
|
||||
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use joindomain feature")
|
||||
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
|
||||
"joindomain feature")
|
||||
|
||||
ident.identification.domainAdmin = str(self.params['customization'].get('domainadmin'))
|
||||
ident.identification.joinDomain = str(self.params['customization'].get('joindomain'))
|
||||
|
|
|
@@ -75,18 +75,21 @@ options:
default: None
esxi:
description:
- Dictionary which includes datacenter and hostname on which the VM should be created. For standalone ESXi hosts, ha-datacenter should be used as the datacenter name
- Dictionary which includes datacenter and hostname on which the VM should be created. For standalone ESXi hosts, ha-datacenter should be used as the
datacenter name
required: false
default: null
state:
description:
- Indicate desired state of the vm. 'reconfigured' only applies changes to 'vm_cdrom', 'memory_mb', and 'num_cpus' in vm_hardware parameter. The 'memory_mb' and 'num_cpus' changes are applied to powered-on vms when hot-plugging is enabled for the guest.
- Indicate desired state of the vm. 'reconfigured' only applies changes to 'vm_cdrom', 'memory_mb', and 'num_cpus' in vm_hardware parameter.
The 'memory_mb' and 'num_cpus' changes are applied to powered-on vms when hot-plugging is enabled for the guest.
default: present
choices: ['present', 'powered_off', 'absent', 'powered_on', 'restarted', 'reconfigured']
from_template:
version_added: "1.9"
description:
- Specifies if the VM should be deployed from a template (mutually exclusive with 'state' parameter). No guest customization changes to hardware such as CPU, RAM, NICs or Disks can be applied when launching from template.
- Specifies if the VM should be deployed from a template (mutually exclusive with 'state' parameter). No guest customization changes to hardware
such as CPU, RAM, NICs or Disks can be applied when launching from template.
default: no
choices: ['yes', 'no']
template_src:
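A minimal sketch of the esxi dictionary described above; the hostname value is a placeholder, while ha-datacenter is the documented datacenter name for standalone ESXi hosts.

    # Hypothetical fragment of a vsphere_guest task.
    esxi:
      datacenter: ha-datacenter          # documented value for standalone ESXi hosts
      hostname: esxi01.example.com       # placeholder host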
@@ -128,7 +131,8 @@ options:
default: null
vm_hw_version:
description:
- Desired hardware version identifier (for example, "vmx-08" for vms that needs to be managed with vSphere Client). Note that changing hardware version of existing vm is not supported.
- Desired hardware version identifier (for example, "vmx-08" for vms that needs to be managed with vSphere Client). Note that changing hardware
version of existing vm is not supported.
required: false
default: null
version_added: "1.7"
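For illustration, vm_hw_version as described above might appear in a task like this; everything except vm_hw_version itself is an assumption.

    # Hypothetical fragment -- only vm_hw_version is documented in the hunk above.
    vsphere_guest:
      vcenter_hostname: vcenter.example.com   # assumed
      guest: demo-vm-01                       # assumed
      vm_hw_version: vmx-08                   # cannot be changed on an existing vm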
@@ -1780,7 +1784,8 @@ def main():
# CONNECT TO THE SERVER
viserver = VIServer()
if validate_certs and not hasattr(ssl, 'SSLContext') and not vcenter_hostname.startswith('http://'):
module.fail_json(msg='pysphere does not support verifying certificates with python < 2.7.9. Either update python or set validate_certs=False on the task')
module.fail_json(msg='pysphere does not support verifying certificates with python < 2.7.9. Either update python or set '
'validate_certs=False on the task')

try:
viserver.connect(vcenter_hostname, username, password)
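A hedged sketch of the workaround named in that error message for hosts stuck on Python < 2.7.9; the task layout and the other parameters are assumptions.

    # Hypothetical task -- only validate_certs itself is referenced in the message above.
    - name: Power on a guest without certificate verification (Python < 2.7.9 only)
      vsphere_guest:
        vcenter_hostname: vcenter.example.com
        username: admin
        password: "{{ vcenter_password }}"
        guest: demo-vm-01
        validate_certs: no                # needed only when ssl.SSLContext is unavailable
        state: powered_on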
@@ -41,7 +41,10 @@ description:
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as
your host, you may want to add C(serial: 1) to the plays.
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.

options:
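Since the same note recurs for several of these webfaction modules below, one hedged play-level sketch of the C(serial: 1) advice; the module call and its arguments are placeholders.

    # Hypothetical play -- serial: 1 keeps multiple hosts from hitting the remote
    # webfaction API simultaneously when the play does not target localhost.
    - hosts: webfaction_hosts
      serial: 1
      tasks:
        - name: Ensure the application exists
          webfaction_app:                          # placeholder module and arguments
            name: my_app
            state: present
            login_name: "{{ webfaction_user }}"
            login_password: "{{ webfaction_password }}"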
@@ -38,7 +38,10 @@ description:
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as
your host, you may want to add C(serial: 1) to the plays.
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
@@ -36,8 +36,12 @@ description:
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- If you are I(deleting) domains by using C(state=absent), then note that if you specify subdomains, just those particular subdomains will be deleted. If you don't specify subdomains, the domain will be deleted.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- If you are I(deleting) domains by using C(state=absent), then note that if you specify subdomains, just those particular subdomains will be deleted.
If you don't specify subdomains, the domain will be deleted.
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as
your host, you may want to add C(serial: 1) to the plays.
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.

options:
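To make the subdomain-deletion note above concrete, a hedged sketch; the module name (webfaction_domain) and exact argument names are assumptions based on the documentation being wrapped here.

    # Hypothetical task -- with subdomains listed, only those subdomains are removed;
    # omit the list and the whole domain is removed instead.
    - name: Remove only the www and blog subdomains
      webfaction_domain:
        name: example.com
        subdomains:
          - www
          - blog
        state: absent
        login_name: "{{ webfaction_user }}"
        login_password: "{{ webfaction_password }}"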
@@ -35,7 +35,10 @@ description:
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as
your host, you may want to add C(serial: 1) to the plays.
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.
options:
@@ -36,9 +36,13 @@ description:
author: Quentin Stafford-Fraser (@quentinsf)
version_added: "2.0"
notes:
- Sadly, you I(do) need to know your webfaction hostname for the C(host) parameter. But at least, unlike the API, you don't need to know the IP address - you can use a DNS name.
- Sadly, you I(do) need to know your webfaction hostname for the C(host) parameter. But at least, unlike the API, you don't need to know the IP
address. You can use a DNS name.
- If a site of the same name exists in the account but on a different host, the operation will exit.
- "You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API - the location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as your host, you may want to add C(serial: 1) to the plays."
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you don't specify I(localhost) as
your host, you may want to add C(serial: 1) to the plays.
- See `the webfaction API <http://docs.webfaction.com/xmlrpc-api/>`_ for more info.

options:
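A hedged sketch of the C(host) note above, using a DNS name rather than an IP; apart from that point, the module name (webfaction_site) and the other arguments are assumptions.

    # Hypothetical task -- only the host-as-DNS-name point comes from the note above.
    - name: Ensure the site exists
      webfaction_site:
        name: my_site
        host: web512.webfaction.com       # webfaction hostname as a DNS name, not an IP
        state: present
        login_name: "{{ webfaction_user }}"
        login_password: "{{ webfaction_password }}"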