Amazon Web Services Guide
=========================

.. _aws_intro:

Introduction
````````````

Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this
section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in an AWS context.

Requirements for the AWS modules are minimal.

All of the modules require and are tested against recent versions of botocore and boto3. Starting with the 2.0 AWS collection releases, it is generally the policy of the collections to support the versions of these libraries released 12 months prior to the most recent major collection revision. Individual modules may require a more recent library version to support specific features, or may require the boto library; check the module documentation for the minimum required version for each module. You must have the boto3 Python module installed on your control machine. You can install these modules from your OS distribution or using the Python package installer: ``pip install boto3``.

Starting with the 2.0 releases of both collections, Python 2.7 support will be ended in accordance with AWS' `end of Python 2.7 support <https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/>`_ and Python 3.6 or greater will be required.


Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.

In your playbook steps, we'll typically be using the following pattern for provisioning steps::

    - hosts: localhost
      gather_facts: False
      tasks:
        - ...

.. _aws_authentication:

Authentication
``````````````

Authentication with the AWS-related modules is handled by either
specifying your access and secret key as ENV variables or module arguments.

For environment variables::

    export AWS_ACCESS_KEY_ID='AK123'
    export AWS_SECRET_ACCESS_KEY='abc123'

For storing these in a vars_file, ideally encrypted with ansible-vault::

    ---
    ec2_access_key: "--REMOVED--"
    ec2_secret_key: "--REMOVED--"

Note that if you store your credentials in a vars_file, you need to refer to them in each AWS module. For example::

    - ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        image: "..."

.. _aws_provisioning:

Provisioning
````````````

The ec2 module provisions and de-provisions instances within EC2.

An example of making sure there are only 5 instances tagged 'Demo' in EC2 follows.

In the example below, the "exact_count" of instances is set to 5. This means if there are 0 instances already existing, then
5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would
be terminated.

What is being counted is specified by the "count_tag" parameter. The parameter "instance_tags" is used to apply tags to the newly created
instances::

    # demo_setup.yml

    - hosts: localhost
      gather_facts: False

      tasks:

        - name: Provision a set of instances
          ec2:
            key_name: my_key
            group: test
            instance_type: t2.micro
            image: "{{ ami_id }}"
            wait: true
            exact_count: 5
            count_tag:
              Name: Demo
            instance_tags:
              Name: Demo
          register: ec2

The data about what instances are created is saved by the "register" keyword in the variable named "ec2".

From this, we'll use the add_host module to dynamically create a host group consisting of these new instances.
This facilitates performing configuration actions on the hosts immediately in a subsequent task::

    # demo_setup.yml

    - hosts: localhost
      gather_facts: False

      tasks:

        - name: Provision a set of instances
          ec2:
            key_name: my_key
            group: test
            instance_type: t2.micro
            image: "{{ ami_id }}"
            wait: true
            exact_count: 5
            count_tag:
              Name: Demo
            instance_tags:
              Name: Demo
          register: ec2

        - name: Add all instance public IPs to host group
          add_host: hostname={{ item.public_ip }} groups=ec2hosts
          loop: "{{ ec2.instances }}"

With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::

    # demo_setup.yml

    - name: Provision a set of instances
      hosts: localhost
      # ... AS ABOVE ...

    - hosts: ec2hosts
      name: configuration play
      user: ec2-user
      gather_facts: true

      tasks:

        - name: Check NTP service
          service: name=ntpd state=started

.. _aws_security_groups:

Security Groups
```````````````

Security groups on AWS are stateful. The response to a request from your instance is allowed to flow in regardless of inbound security group rules, and vice versa.
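
Because of this statefulness, a group often only needs to describe one direction of a conversation. As a minimal sketch (the group name, region, and admin CIDR below are all hypothetical), allowing inbound SSH is enough for the SSH replies to leave the instance, with no matching egress rule required::

    - name: Create a security group allowing inbound SSH only
      ec2_group:
        name: demo_ssh                  # hypothetical group name
        description: allow inbound SSH from the admin network
        region: eu-central-1            # pick your region
        state: present
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 203.0.113.0/24     # hypothetical admin network
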
If you only want to allow traffic to the AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as an egress rule::

    - name: fetch raw ip ranges for aws s3
      set_fact:
        raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"

    - name: prepare list structure for ec2_group module
      set_fact:
        s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
      loop: "{{ raw_s3_ranges }}"

    - name: set S3 IP ranges to egress rules
      ec2_group:
        name: aws_s3_ip_ranges
        description: allow outgoing traffic to aws S3 service
        region: eu-central-1
        state: present
        vpc_id: vpc-123456
        purge_rules: true
        purge_rules_egress: true
        rules: []
        rules_egress: "{{ s3_ranges }}"
        tags:
          Name: aws_s3_ip_ranges

.. _aws_host_inventory:

Host Inventory
``````````````

Once your nodes are spun up, you'll probably want to talk to them again. With a cloud setup, it's best not to maintain a static list of cloud hostnames
in text files. Rather, the best way to handle this is to use the aws_ec2 inventory plugin. See :ref:`dynamic_inventory`.

The plugin will also return instances that were created outside of Ansible, and allows Ansible to manage them.

.. _aws_tags_and_groups:

Tags And Groups And Variables
`````````````````````````````

When using the inventory plugin, you can configure extra inventory structure based on the metadata returned by AWS.
185 186For instance, you might use ``keyed_groups`` to create groups from instance tags:: 187 188 plugin: aws_ec2 189 keyed_groups: 190 - prefix: tag 191 key: tags 192 193 194You can then target all instances with a "class" tag where the value is "webserver" in a play:: 195 196 - hosts: tag_class_webserver 197 tasks: 198 - ping 199 200You can also use these groups with 'group_vars' to set variables that are automatically applied to matching instances. See :ref:`splitting_out_vars`. 201 202.. _aws_pull: 203 204Autoscaling with Ansible Pull 205````````````````````````````` 206 207Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that 208can configure autoscaling policy. 209 210When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node. 211 212To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally. 213 214One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context. 215For this reason, the autoscaling solution provided below in the next section can be a better approach. 216 217Read :ref:`ansible-pull` for more information on pull-mode playbooks. 218 219.. _aws_autoscale: 220 221Autoscaling with Ansible Tower 222`````````````````````````````` 223 224:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call 225a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way 226to reconfigure ephemeral nodes. See the Tower install and product documentation for more details. 
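
As a sketch of that pattern (the Tower hostname, job template ID, and host config key below are all placeholders, and the job template must have provisioning callbacks enabled), the boot script on a new instance might run::

    curl --data "host_config_key=HOST_CONFIG_KEY" \
        https://tower.example.com/api/v2/job_templates/5/callback/

Tower then matches the requesting host against its inventory and launches the job template against just that host.
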

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
with remote hosts.

.. _aws_cloudformation_example:

Ansible With (And Versus) CloudFormation
````````````````````````````````````````

CloudFormation is an Amazon technology for defining a cloud stack as a JSON or YAML document.

Ansible modules provide an easier-to-use interface than CloudFormation in many cases, without defining a complex JSON/YAML document.
This is recommended for most users.

However, for users who have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
to Amazon.

When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch
those images, or Ansible will be invoked through user data once the image comes online, or a combination of the two.

Please see the examples in the Ansible CloudFormation module for more details.

.. _aws_image_build:

AWS Image Building With Ansible
```````````````````````````````

Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this,
one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with
the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible's
ec2_ami module.

Generally speaking, we find most users using Packer.

See the Packer documentation of the `Ansible local Packer provisioner <https://www.packer.io/docs/provisioners/ansible-local.html>`_ and `Ansible remote Packer provisioner <https://www.packer.io/docs/provisioners/ansible.html>`_.
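
Whichever image-building tool you choose, the playbook it applies is ordinary Ansible. A minimal sketch of an image-baking playbook (the package and service names are illustrative and vary by distribution) might install and enable NTP so every instance launched from the resulting AMI boots with it configured::

    - hosts: all
      become: true

      tasks:

        - name: Install NTP into the base image
          package:
            name: ntp        # illustrative; e.g. chrony on newer distributions
            state: present

        - name: Enable the service so it starts on every boot of the AMI
          service:
            name: ntpd       # illustrative service name
            enabled: true
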

If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.

.. _aws_next_steps:

Next Steps: Explore Modules
```````````````````````````

Ansible ships with lots of modules for configuring a wide array of AWS services. Browse the "Cloud" category of the module
documentation for a full list with examples.

.. seealso::

   :ref:`list_of_collections`
       Browse existing collections, modules, and plugins
   :ref:`working_with_playbooks`
       An introduction to playbooks
   :ref:`playbooks_delegation`
       Delegation, useful for working with load balancers, clouds, and locally executed steps.
   `User Mailing List <https://groups.google.com/group/ansible-devel>`_
       Have a question? Stop by the Google group!
   `irc.libera.chat <https://libera.chat/>`_
       #ansible IRC chat channel