Amazon Web Services Guide
=========================

.. _aws_intro:

Introduction
````````````

Ansible contains a number of modules for controlling Amazon Web Services (AWS).  The purpose of this
section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in an AWS context.

Requirements for the AWS modules are minimal.

All of the modules require and are tested against recent versions of boto.  You'll need this Python module installed on your control machine.  Boto can be installed from your OS distribution or via pip with ``pip install boto``.

Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.

In your playbooks, you'll typically use the following pattern for provisioning steps::

    - hosts: localhost
      gather_facts: False
      tasks:
        - ...

.. _aws_authentication:

Authentication
``````````````

Authentication with the AWS-related modules is handled by specifying your access and secret
keys either as environment variables or as module arguments.

For environment variables::

    export AWS_ACCESS_KEY_ID='AK123'
    export AWS_SECRET_ACCESS_KEY='abc123'

For storing these in a vars_file, ideally encrypted with ansible-vault::

    ---
    ec2_access_key: "--REMOVED--"
    ec2_secret_key: "--REMOVED--"

Note that if you store your credentials in a vars_file, you need to refer to them in each AWS module. For example::

    - ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        image: "..."
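
A minimal sketch of how such a play could pull in the credentials file (the file path here is hypothetical)::

    - hosts: localhost
      gather_facts: False
      vars_files:
        - vars/aws_creds.yml   # hypothetical path to the vault-encrypted vars_file
      tasks:
        - ec2:
            aws_access_key: "{{ ec2_access_key }}"
            aws_secret_key: "{{ ec2_secret_key }}"
            image: "..."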

.. _aws_provisioning:

Provisioning
````````````

The ec2 module provisions and de-provisions instances within EC2.

An example of making sure there are only 5 instances tagged 'Demo' in EC2 follows.

In the example below, the "exact_count" of instances is set to 5.  This means if there are 0 instances already existing, then
5 new instances would be created.  If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would
be terminated.

What is being counted is specified by the "count_tag" parameter.  The parameter "instance_tags" is used to apply tags to the newly created
instances::

    # demo_setup.yml

    - hosts: localhost
      gather_facts: False

      tasks:

        - name: Provision a set of instances
          ec2:
             key_name: my_key
             group: test
             instance_type: t2.micro
             image: "{{ ami_id }}"
             wait: true
             exact_count: 5
             count_tag:
                Name: Demo
             instance_tags:
                Name: Demo
          register: ec2

The data about the instances that were created is saved by the "register" keyword in the variable named "ec2".
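
If you want to see exactly what was captured, a quick sketch using the debug module (it simply prints the public IPs from the registered result) can help::

    - name: Show the public IPs of the newly created instances
      debug:
        msg: "{{ ec2.instances | map(attribute='public_ip') | list }}"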

From this, we'll use the add_host module to dynamically create a host group consisting of these new instances.  This facilitates performing configuration actions on the hosts immediately in a subsequent task::

    # demo_setup.yml

    - hosts: localhost
      gather_facts: False

      tasks:

        - name: Provision a set of instances
          ec2:
             key_name: my_key
             group: test
             instance_type: t2.micro
             image: "{{ ami_id }}"
             wait: true
             exact_count: 5
             count_tag:
                Name: Demo
             instance_tags:
                Name: Demo
          register: ec2

        - name: Add all instance public IPs to host group
          add_host: hostname={{ item.public_ip }} groups=ec2hosts
          loop: "{{ ec2.instances }}"
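
Brand-new instances may not accept SSH connections immediately.  Before configuring them in a follow-on play, you may want to pause until they are reachable; a minimal sketch using the wait_for module (the delay and timeout values are illustrative) could be added as another task in the same play::

    - name: Wait for SSH to come up on the new instances
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 10      # illustrative delay value
        timeout: 320   # illustrative timeout value
        state: started
      loop: "{{ ec2.instances }}"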

With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::

    # demo_setup.yml

    - name: Provision a set of instances
      hosts: localhost
      # ... AS ABOVE ...

    - hosts: ec2hosts
      name: configuration play
      user: ec2-user
      gather_facts: true

      tasks:

         - name: Check NTP service
           service: name=ntpd state=started

.. _aws_security_groups:

Security Groups
```````````````

Security groups on AWS are stateful. The response to a request from your instance is allowed to flow in regardless of inbound security group rules, and vice-versa.
If you only want to allow traffic to the AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as egress rules::

    - name: fetch raw ip ranges for aws s3
      set_fact:
        raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"

    - name: prepare list structure for ec2_group module
      set_fact:
        s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
      with_items: "{{ raw_s3_ranges }}"

    - name: set S3 IP ranges to egress rules
      ec2_group:
        name: aws_s3_ip_ranges
        description: allow outgoing traffic to aws S3 service
        region: eu-central-1
        state: present
        vpc_id: vpc-123456
        purge_rules: true
        purge_rules_egress: true
        rules: []
        rules_egress: "{{ s3_ranges }}"
        tags:
          Name: aws_s3_ip_ranges

.. _aws_host_inventory:

Host Inventory
``````````````

Once your nodes are spun up, you'll probably want to talk to them again.  With a cloud setup, it's best not to maintain a static list of cloud hostnames
in text files.  Rather, the best way to handle this is to use the ec2 dynamic inventory script. See :ref:`dynamic_inventory`.

This will also dynamically select even nodes that were created outside of Ansible, and allow Ansible to manage them.

See :ref:`dynamic_inventory` for how to use this, then return to this chapter.

.. _aws_tags_and_groups:

Tags And Groups And Variables
`````````````````````````````

When using the ec2 inventory script, hosts automatically appear in groups based on how they are tagged in EC2.

For instance, if a host is given the "class" tag with the value of "webserver",
it will be automatically discoverable via a dynamic group like so::

   - hosts: tag_class_webserver
     tasks:
       - ping:

Using this philosophy can be a great way to keep systems separated by the function they perform.

In this example, if you want to define variables that are automatically applied to each machine tagged with a 'class' of 'webserver', 'group_vars'
in Ansible can be used.  See :ref:`splitting_out_vars`.
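
For example, a minimal sketch of such a file, placed at ``group_vars/tag_class_webserver.yml`` to match the dynamic group name above (the variable names are hypothetical)::

    ---
    # hypothetical variables applied to every host tagged class=webserver
    http_port: 80
    max_clients: 200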

Similar groups are available for regions and other classifications, and can be similarly assigned variables using the same mechanism.
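
For instance, a play could target every host the inventory script placed in a particular region group (the exact group name depends on how your inventory script is configured)::

   - hosts: us-east-1   # group name depends on your inventory script configuration
     tasks:
       - ping: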

.. _aws_pull:

Autoscaling with Ansible Pull
`````````````````````````````

Amazon's Autoscaling feature automatically increases or decreases capacity based on load.  There are also Ansible modules shown in the cloud documentation that
can configure autoscaling policy.

When nodes come online, it may not be sufficient to wait for the next cycle of an Ansible command to come along and configure that node.

To handle this, pre-bake machine images which contain the necessary ansible-pull invocation.  Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.
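
A minimal sketch of baking that invocation into an image, assuming a hypothetical repository URL and playbook name, is to schedule ansible-pull with the cron module while the image is being built::

    - name: Schedule ansible-pull to run every 15 minutes
      cron:
        name: ansible-pull
        minute: "*/15"
        # hypothetical repository URL and playbook name
        job: "ansible-pull -U https://git.example.com/infra.git local.yml >> /var/log/ansible-pull.log 2>&1"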

One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context.
For this reason, the autoscaling solution described in the next section can be a better approach.

Read :ref:`ansible-pull` for more information on pull-mode playbooks.

.. _aws_autoscale:

Autoscaling with Ansible Tower
``````````````````````````````

:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases.  In this mode, a simple curl script can call
a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up.  This can be a great way
to reconfigure ephemeral nodes.  See the Tower install and product documentation for more details.

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
with remote hosts.

.. _aws_cloudformation_example:

Ansible With (And Versus) CloudFormation
````````````````````````````````````````

CloudFormation is an Amazon technology for defining a cloud stack as a JSON or YAML document.

Ansible modules provide an easier-to-use interface than CloudFormation in many cases, without requiring you to define a complex JSON/YAML document.
This is recommended for most users.

However, for users who have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
to Amazon.
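
A minimal sketch of applying a template with the cloudformation module (the stack name, template path, and parameters are hypothetical)::

    - name: Launch a stack from a local CloudFormation template
      cloudformation:
        stack_name: demo-stack          # hypothetical stack name
        state: present
        region: us-east-1
        template: files/demo-stack.json # hypothetical template path
        template_parameters:
          InstanceType: t2.micro
        tags:
          Stack: demo-stack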

When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch
those images, or Ansible will be invoked through user data once the image comes online, or a combination of the two.

Please see the examples in the Ansible CloudFormation module for more details.

.. _aws_image_build:

AWS Image Building With Ansible
```````````````````````````````

Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation.  To do this,
one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for use with
the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module.   Possible tools include Packer, aminator, and Ansible's
ec2_ami module.
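
If you go the ec2_ami route, a minimal sketch of creating an AMI from an already-configured instance (the instance ID and image name are hypothetical)::

    - name: Create an AMI from a configured instance
      ec2_ami:
        instance_id: i-0123456789abcdef0   # hypothetical instance ID
        name: demo-base-image              # hypothetical AMI name
        region: eu-central-1
        wait: true
        tags:
          Name: demo-base-image
      register: ami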

Generally speaking, we find most users using Packer.

See the Packer documentation of the `Ansible local Packer provisioner <https://www.packer.io/docs/provisioners/ansible-local.html>`_ and `Ansible remote Packer provisioner <https://www.packer.io/docs/provisioners/ansible.html>`_.

If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.

.. _aws_next_steps:

Next Steps: Explore Modules
```````````````````````````

Ansible ships with lots of modules for configuring a wide array of EC2 services.  Browse the "Cloud" category of the module
documentation for a full list with examples.

.. seealso::

   :ref:`all_modules`
       All the documentation for Ansible modules
   :ref:`working_with_playbooks`
       An introduction to playbooks
   :ref:`playbooks_delegation`
       Delegation, useful for working with load balancers, clouds, and locally executed steps.
   `User Mailing List <https://groups.google.com/group/ansible-devel>`_
       Have a question?  Stop by the Google group!
   `irc.libera.chat <https://libera.chat/>`_
       #ansible IRC chat channel