CloudStack Cloud Guide
======================

.. _cloudstack_introduction:

Introduction
````````````
The purpose of this section is to explain how to put Ansible modules together to use Ansible in a CloudStack context. You will find more usage examples in the details section of each module.

Ansible contains a number of extra modules for interacting with CloudStack based clouds. All modules support check mode, are designed to be idempotent, have been created and tested, and are maintained by the community.

.. note:: Some of the modules will require domain admin or root admin privileges.

Prerequisites
`````````````
Prerequisites for using the CloudStack modules are minimal. In addition to Ansible itself, all of the modules require the Python library ``cs`` (https://pypi.org/project/cs/).

You'll need this Python module installed on the execution host, usually your workstation.

.. code-block:: bash

    $ pip install cs

Or alternatively, starting with Debian 9 and Ubuntu 16.04:

.. code-block:: bash

    $ sudo apt install python-cs

.. note:: cs also includes a command line interface for ad-hoc interaction with the CloudStack API, e.g. ``$ cs listVirtualMachines state=Running``.

Limitations and Known Issues
````````````````````````````
VPC support has been improved since Ansible 2.3 but is not yet fully implemented. The community is working on the VPC integration.

Credentials File
````````````````
You can pass the credentials and the endpoint of your cloud as module arguments, however in most cases it is far less work to store your credentials in the ``cloudstack.ini`` file.

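If you prefer not to use a credentials file at all, the same information can be passed as module arguments; the endpoint and key values below are placeholders:

.. code-block:: yaml

    - name: ensure my ssh public key exists
      cs_sshkeypair:
        name: my-ssh-key
        public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        api_url: https://cloud.example.com/client/api
        api_key: your api key
        api_secret: your api secret
      delegate_to: localhost
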
The python library ``cs`` looks for the credentials file in the following order (last one wins):

* A ``.cloudstack.ini`` (note the dot) file in the home directory.
* A ``CLOUDSTACK_CONFIG`` environment variable pointing to an .ini file.
* A ``cloudstack.ini`` (without the dot) file in the current working directory, the same directory your playbooks are located in.

The structure of the ini file must look like this:

.. code-block:: bash

    $ cat $HOME/.cloudstack.ini
    [cloudstack]
    endpoint = https://cloud.example.com/client/api
    key = api key
    secret = api secret
    timeout = 30

.. note:: The section ``[cloudstack]`` is the default section. The ``CLOUDSTACK_REGION`` environment variable can be used to define the default section.

.. versionadded:: 2.4

The ``CLOUDSTACK_*`` environment variables documented by the library ``cs``, e.g. ``CLOUDSTACK_TIMEOUT``, ``CLOUDSTACK_METHOD``, etc., are also supported by Ansible. It is even possible to have an incomplete config in your cloudstack.ini:

.. code-block:: bash

    $ cat $HOME/.cloudstack.ini
    [cloudstack]
    endpoint = https://cloud.example.com/client/api
    timeout = 30

and supply the missing data by either setting environment variables or task params:

.. code-block:: yaml

    ---
    - name: provision our VMs
      hosts: cloud-vm
      tasks:
        - name: ensure VMs are created and running
          delegate_to: localhost
          cs_instance:
            api_key: your api key
            api_secret: your api secret
            ...

Regions
```````
If you use more than one CloudStack region, you can define as many sections as you want and name them as you like, e.g.:

.. code-block:: bash

    $ cat $HOME/.cloudstack.ini
    [exoscale]
    endpoint = https://api.exoscale.ch/compute
    key = api key
    secret = api secret

    [example_cloud_one]
    endpoint = https://cloud-one.example.com/client/api
    key = api key
    secret = api secret

    [example_cloud_two]
    endpoint = https://cloud-two.example.com/client/api
    key = api key
    secret = api secret

.. hint:: Sections can also be used to log in to the same region using different accounts.

By passing the ``api_region`` argument to the CloudStack modules, the desired region will be selected.

.. code-block:: yaml

    - name: ensure my ssh public key exists on Exoscale
      cs_sshkeypair:
        name: my-ssh-key
        public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        api_region: exoscale
      delegate_to: localhost

Or by looping over a list of regions if you want to run the task in every region:

.. code-block:: yaml

    - name: ensure my ssh public key exists in all CloudStack regions
      cs_sshkeypair:
        name: my-ssh-key
        public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        api_region: "{{ item }}"
      delegate_to: localhost
      loop:
        - exoscale
        - example_cloud_one
        - example_cloud_two

Environment Variables
`````````````````````
.. versionadded:: 2.3

Since Ansible 2.3 it is possible to use environment variables for the domain (``CLOUDSTACK_DOMAIN``), account (``CLOUDSTACK_ACCOUNT``), project (``CLOUDSTACK_PROJECT``), VPC (``CLOUDSTACK_VPC``) and zone (``CLOUDSTACK_ZONE``). This simplifies the tasks by not having to repeat these arguments in every task.

Below is an example of how this can be used in combination with Ansible's block feature:

.. code-block:: yaml

    - hosts: cloud-vm
      tasks:
        - block:
            - name: ensure my ssh public key exists
              cs_sshkeypair:
                name: my-ssh-key
                public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

            - name: ensure my VM is created and running
              cs_instance:
                display_name: "{{ inventory_hostname_short }}"
                template: Linux Debian 7 64-bit 20GB Disk
                service_offering: "{{ cs_offering }}"
                ssh_key: my-ssh-key
                state: running

          delegate_to: localhost
          environment:
            CLOUDSTACK_DOMAIN: root/customers
            CLOUDSTACK_PROJECT: web-app
            CLOUDSTACK_ZONE: sf-1

.. note:: You can still override the environment variables using the module arguments, e.g. ``zone: sf-2``.

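For example, the following task inherits the play's environment but explicitly deploys into another zone through the module argument; the zone and VM names are illustrative:

.. code-block:: yaml

    - name: ensure a VM in another zone than the environment default
      cs_instance:
        name: web-standby-01
        template: Linux Debian 7 64-bit 20GB Disk
        service_offering: Small
        zone: sf-2
        state: running
      delegate_to: localhost
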
.. note:: Unlike ``CLOUDSTACK_REGION``, these additional environment variables are ignored by the CLI ``cs``.

Use Cases
`````````
The following examples should give you some ideas of how to use the modules for provisioning VMs in the cloud. As always, there is more than one way to do it, but keeping it simple at the beginning is a good start.

Use Case: Provisioning in an Advanced Networking CloudStack setup
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Our CloudStack cloud has an advanced networking setup. We would like to provision web servers, which get a static NAT and have firewall ports 80 and 443 opened. Further, we provision database servers, to which we do not give any access. For accessing the VMs by SSH, we use an SSH jump host.

This is how our inventory looks:

.. code-block:: none

    [cloud-vm:children]
    webserver
    db-server
    jumphost

    [webserver]
    web-01.example.com  public_ip=198.51.100.20
    web-02.example.com  public_ip=198.51.100.21

    [db-server]
    db-01.example.com
    db-02.example.com

    [jumphost]
    jump.example.com  public_ip=198.51.100.22

As you can see, the public IPs for our web servers and jumphost have been assigned as the variable ``public_ip`` directly in the inventory.

To configure the jumphost, web servers and database servers, we use ``group_vars``. The ``group_vars`` directory contains 4 files for the configuration of the groups: cloud-vm, jumphost, webserver and db-server. The cloud-vm file specifies the defaults of our cloud infrastructure.

.. code-block:: yaml

    # file: group_vars/cloud-vm
    ---
    cs_offering: Small
    cs_firewall: []

Our database servers should get more CPU and RAM, so we define a ``Large`` offering for them.

.. code-block:: yaml

    # file: group_vars/db-server
    ---
    cs_offering: Large

The web servers should get a ``Small`` offering, as we would scale them horizontally; this is also our default offering. We also ensure the well-known web ports are opened for the world.

.. code-block:: yaml

    # file: group_vars/webserver
    ---
    cs_firewall:
      - { port: 80 }
      - { port: 443 }

Further, we provision a jump host which has only port 22 opened for accessing the VMs from our office IPv4 network.

.. code-block:: yaml

    # file: group_vars/jumphost
    ---
    cs_firewall:
      - { port: 22, cidr: "17.17.17.0/24" }

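Because webserver, db-server and jumphost are children of cloud-vm, the more specific group file wins. For illustration only, the effective variables for a web server would be:

.. code-block:: yaml

    # effective variables for web-01.example.com (illustration, not a real file)
    cs_offering: Small          # inherited from group_vars/cloud-vm
    cs_firewall:                # overridden by group_vars/webserver
      - { port: 80 }
      - { port: 443 }
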
Now to the fun part. We create a playbook to create our infrastructure and call it ``infra.yml``:

.. code-block:: yaml

    # file: infra.yml
    ---
    - name: provision our VMs
      hosts: cloud-vm
      tasks:
        - name: run all enclosed tasks from localhost
          delegate_to: localhost
          block:
            - name: ensure VMs are created and running
              cs_instance:
                name: "{{ inventory_hostname_short }}"
                template: Linux Debian 7 64-bit 20GB Disk
                service_offering: "{{ cs_offering }}"
                state: running

            - name: ensure firewall ports are opened
              cs_firewall:
                ip_address: "{{ public_ip }}"
                port: "{{ item.port }}"
                cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
              loop: "{{ cs_firewall }}"
              when: public_ip is defined

            - name: ensure static NATs
              cs_staticnat:
                vm: "{{ inventory_hostname_short }}"
                ip_address: "{{ public_ip }}"
              when: public_ip is defined

In the above play we defined 3 tasks and used the group ``cloud-vm`` as the target to handle all VMs in the cloud, but instead of SSHing to these VMs, we use ``delegate_to: localhost`` to execute the API calls locally from our workstation.

In the first task, we ensure we have a running VM created from the Debian template. If the VM is already created but stopped, it would just be started. If you would like to change the offering on an existing VM, you must add ``force: yes`` to the task, which would stop the VM, change the offering and start the VM again.

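Such an offering upgrade could look like the following sketch; note that ``force: yes`` causes a short downtime because the VM is stopped and started again:

.. code-block:: yaml

    - name: upgrade the service offering of an existing VM
      cs_instance:
        name: "{{ inventory_hostname_short }}"
        template: Linux Debian 7 64-bit 20GB Disk
        service_offering: Large
        force: yes
        state: running
      delegate_to: localhost
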
In the second task we ensure the ports are opened if we assign a public IP to the VM.

In the third task we add a static NAT to the VMs that have a public IP defined.

.. note:: The public IP addresses must have been acquired in advance; also see ``cs_ip_address``.

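Acquiring such an address could look like this sketch; the network name is an assumption, see the ``cs_ip_address`` module documentation for all parameters:

.. code-block:: yaml

    - name: ensure a public IP address is acquired
      cs_ip_address:
        network: my-network
        zone: sf-1
      delegate_to: localhost
      register: ip_address
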
.. note:: For some modules, e.g. ``cs_sshkeypair``, you usually want this to be executed only once, not for every VM. Therefore you would make a separate play for it, targeting localhost. You will find an example in the use case below.

Use Case: Provisioning on a Basic Networking CloudStack setup
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

A basic networking CloudStack setup is slightly different: every VM gets a public IP directly assigned and security groups are used for the access restriction policy.

This is how our inventory looks:

.. code-block:: none

    [cloud-vm:children]
    webserver

    [webserver]
    web-01.example.com
    web-02.example.com

The defaults for your VMs look like this:

.. code-block:: yaml

    # file: group_vars/cloud-vm
    ---
    cs_offering: Small
    cs_securitygroups: [ 'default' ]

Our web servers will additionally be in the security group ``web``:

.. code-block:: yaml

    # file: group_vars/webserver
    ---
    cs_securitygroups: [ 'default', 'web' ]

The playbook looks like the following:

.. code-block:: yaml

    # file: infra.yml
    ---
    - name: cloud base setup
      hosts: localhost
      tasks:
      - name: upload ssh public key
        cs_sshkeypair:
          name: defaultkey
          public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

      - name: ensure security groups exist
        cs_securitygroup:
          name: "{{ item }}"
        loop:
          - default
          - web

      - name: add inbound SSH to security group default
        cs_securitygroup_rule:
          security_group: default
          start_port: "{{ item }}"
          end_port: "{{ item }}"
        loop:
          - 22

      - name: add inbound TCP rules to security group web
        cs_securitygroup_rule:
          security_group: web
          start_port: "{{ item }}"
          end_port: "{{ item }}"
        loop:
          - 80
          - 443

    - name: install VMs in the cloud
      hosts: cloud-vm
      tasks:
      - delegate_to: localhost
        block:
        - name: create and run VMs on CloudStack
          cs_instance:
            name: "{{ inventory_hostname_short }}"
            template: Linux Debian 7 64-bit 20GB Disk
            service_offering: "{{ cs_offering }}"
            security_groups: "{{ cs_securitygroups }}"
            ssh_key: defaultkey
            state: running
          register: vm

        - name: show VM IP
          debug:
            msg: "VM {{ inventory_hostname }} {{ vm.default_ip }}"

        - name: assign IP to the inventory
          set_fact:
            ansible_ssh_host: "{{ vm.default_ip }}"

        - name: waiting for SSH to come up
          wait_for:
            port: 22
            host: "{{ vm.default_ip }}"
            delay: 5

In the first play we set up the security groups; in the second play the VMs will be created and assigned to these groups. Further, you see that we assign the public IP returned by the module to the host inventory. This is needed because we do not know in advance which IPs we will get. In a subsequent step, you would configure the DNS servers with these IPs so the VMs can be accessed by their DNS names.

In the last task we wait for SSH to be accessible, so any later play would be able to access the VM by SSH without failure.

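A later play in the same playbook could then configure the web servers over SSH as usual; the package is just an example:

.. code-block:: yaml

    - name: configure the web servers
      hosts: webserver
      become: yes
      tasks:
        - name: ensure nginx is installed
          apt:
            name: nginx
            state: present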