:orphan:

.. _testing_units_modules:

****************************
Unit Testing Ansible Modules
****************************

.. highlight:: python

.. contents:: Topics

Introduction
============

This document explains why, how and when you should use unit tests for Ansible modules.
The document doesn't apply to other parts of Ansible for which the recommendations are
normally closer to the Python standard. There is basic documentation for Ansible unit
tests in the developer guide :ref:`testing_units`. This document should
be readable for a new Ansible module author. If you find it incomplete or confusing,
please open a bug or ask for help on Ansible IRC.

What Are Unit Tests?
====================

Ansible includes a set of unit tests in the :file:`test/units` directory. These tests primarily cover the
internals but can also cover Ansible modules. The structure of the unit tests matches
the structure of the code base, so the tests that reside in the :file:`test/units/modules/` directory
are organized by module groups.

Integration tests can be used for most modules, but there are situations where
cases cannot be verified using integration tests. This means that Ansible unit test cases
may extend beyond testing only minimal units and in some cases will include some
level of functional testing.

Why Use Unit Tests?
===================

Ansible unit tests have advantages and disadvantages. It is important to understand these.
Advantages include:

* Most unit tests are much faster than most Ansible integration tests. The complete suite
  of unit tests can be run regularly by a developer on their local system.
* Unit tests can be run by developers who don't have access to the system which the module is
  designed to work on, allowing a level of verification that changes to core functions
  haven't broken module expectations.
* Unit tests can easily substitute system functions, allowing testing of software that
  would otherwise be impractical to test. For example, the ``sleep()`` function can be
  replaced so we can check that a ten minute sleep was requested without actually waiting
  ten minutes.
* Unit tests are run on different Python versions. This allows us to
  ensure that the code behaves in the same way on different Python versions.

There are also some potential disadvantages to unit tests. Unit tests don't normally
test the actual useful, valuable features of software directly, just the internal
implementation:

* Unit tests that test the internal, non-visible features of software may make
  refactoring difficult if those internal features have to change (see also
  `Naming unit tests`_ below)
* Even if the internal feature is working correctly it is possible that there will be a
  problem between the internal code tested and the actual result delivered to the user

Normally the Ansible integration tests (which are written in Ansible YAML) provide better
testing for most module functionality. If those tests already test a feature and perform
well there may be little point in providing a unit test covering the same area as well.

When To Use Unit Tests
======================

There are a number of situations where unit tests are a better choice than integration
tests. For example, testing things which are impossible, slow or very difficult to test
with integration tests, such as:

* Forcing rare / strange / random situations that can't be reliably reproduced, such as
  specific network failures and exceptions
* Extensive testing of slow configuration APIs
* Situations where the integration tests cannot be run as part of the main Ansible
  continuous integration running in Shippable.

Providing quick feedback
------------------------

Example:
  A single step of the rds_instance test cases can take up to 20
  minutes (the time to create an RDS instance in Amazon). The entire
  test run can last for well over an hour. All 16 of the unit tests
  complete execution in less than 2 seconds.

The time saving provided by being able to run the code in a unit test makes it worth
creating a unit test when bug fixing a module, even if those tests do not often identify
problems later. As a basic goal, every module should have at least one unit test which
will give quick feedback in easy cases without having to wait for the integration tests to
complete.

Ensuring correct use of external interfaces
-------------------------------------------

Unit tests can check the way in which external services are run to ensure that they match
specifications or are as efficient as possible *even when the final output will not be changed*.

Example:
  Package managers are often far more efficient when installing multiple packages at once
  rather than each package separately. The final result is the
  same: the packages are all installed, so the efficiency is difficult to verify through
  integration tests. By providing a mock package manager and verifying that it is called
  once, we can build a valuable test for module efficiency.
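
A minimal sketch of such a test, where the ``install_packages`` function and the
package manager's ``install`` call are hypothetical:

```python
from unittest.mock import MagicMock

def install_packages(pkg_manager, packages):
    """Hypothetical module code: install all packages in one transaction."""
    pkg_manager.install(packages)

def test_all_packages_installed_in_a_single_call():
    pkg_manager_double = MagicMock()
    install_packages(pkg_manager_double, ['nginx', 'redis', 'postgresql'])
    # One call covering every package, rather than one call per package.
    pkg_manager_double.install.assert_called_once_with(['nginx', 'redis', 'postgresql'])
```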

Another related use is in the situation where an API has versions which behave
differently. A programmer working on a new version may change the module to work with the
new API version and unintentionally break the old version. A test case
which checks that the call happens properly for the old version can help avoid the
problem. In this situation it is very important to include version numbering in the test case
name (see `Naming unit tests`_ below).

Providing specific design tests
--------------------------------

By building a requirement for a particular part of the
code and then coding to that requirement, unit tests *can* sometimes improve the code and
help future developers understand that code.

Unit tests that test internal implementation details of code, on the other hand, almost
always do more harm than good. Testing that the packages to install are stored in a list
would slow down and confuse a future developer who might need to change that list into a
dictionary for efficiency. This problem can be reduced somewhat with clear test naming so
that the future developer immediately knows to delete the test case, but it is often
better to simply leave out the test case altogether and test for a real valuable feature
of the code, such as installing all of the packages supplied as arguments to the module.

How to unit test Ansible modules
================================

There are a number of techniques for unit testing modules. Beware that most
modules without unit tests are structured in a way that makes testing quite difficult and
can lead to very complicated tests which need more work than the code itself. Effectively using unit
tests may lead you to restructure your code. This is often a good thing and leads
to better code overall. Good restructuring can make your code clearer and easier to understand.

Naming unit tests
-----------------

Unit tests should have logical names. If a developer working on the module being tested
breaks the test case, it should be easy to figure out what the unit test covers from the name.
If a unit test is designed to verify compatibility with a specific software or API version
then include the version in the name of the unit test.

As an example, ``test_v2_state_present_should_call_create_server_with_name()`` would be a
good name, ``test_create_server()`` would not be.

Use of Mocks
------------

Mock objects (from https://docs.python.org/3/library/unittest.mock.html) can be very
useful in building unit tests for special / difficult cases, but they can also
lead to complex and confusing coding situations. One good use for mocks would be in
simulating an API. As with ``six``, the ``mock`` Python package is bundled with Ansible (use
``import units.compat.mock``).

Ensuring failure cases are visible with mock objects
----------------------------------------------------

Functions like :meth:`module.fail_json` are normally expected to terminate execution. When you
run with a mock module object this doesn't happen, since the mock always returns another mock
from a function call. You can set up the mock to raise an exception (as shown below), or you can
assert that these functions have not been called in each test. For example::

  module = MagicMock()
  function_to_test(module, argument)
  module.fail_json.assert_not_called()

This applies not only to calling the main module but almost any other
function in a module which gets the module object.

Mocking of the actual module
----------------------------

The setup of an actual module is quite complex (see `Passing Arguments`_ below) and often
isn't needed for most functions which use a module. Instead you can use a mock object as
the module and create any module attributes needed by the function you are testing. If
you do this, beware that the module exit functions need special handling as mentioned
above, either by throwing an exception or ensuring that they haven't been called. For example::

    class AnsibleExitJson(Exception):
        """Exception class to be raised by module.exit_json and caught by the test case"""
        pass

    def exit_json(*args, **kwargs):
        raise AnsibleExitJson(kwargs)

    # you may also do the same for fail_json
    module = MagicMock()
    module.exit_json.side_effect = exit_json
    with self.assertRaises(AnsibleExitJson) as result:
        my_module.test_this_function(module, argument)
    module.fail_json.assert_not_called()
    assert result.exception.args[0]['changed'] is True

API definition with unit test cases
-----------------------------------

API interaction is usually best tested with the function tests defined in Ansible's
integration testing section, which run against the actual API. There are several cases
where the unit tests are likely to work better.

Defining a module against an API specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This case is especially important for modules interacting with web services, which provide
an API that Ansible uses but which are beyond the control of the user.

By writing a custom emulation of the calls that return data from the API, we can ensure
that only the features which are clearly defined in the specification of the API are
present in the message. This means that we can check that we use the correct
parameters and nothing else.

*Example: in rds_instance unit tests a simple instance state is defined*::

    def simple_instance_list(status, pending):
        return {u'DBInstances': [{u'DBInstanceArn': 'arn:aws:rds:us-east-1:1234567890:db:fakedb',
                                  u'DBInstanceStatus': status,
                                  u'PendingModifiedValues': pending,
                                  u'DBInstanceIdentifier': 'fakedb'}]}

This is then used to create a list of states::

    rds_client_double = MagicMock()
    rds_client_double.describe_db_instances.side_effect = [
        simple_instance_list('rebooting', {"a": "b", "c": "d"}),
        simple_instance_list('available', {"c": "d", "e": "f"}),
        simple_instance_list('rebooting', {"a": "b"}),
        simple_instance_list('rebooting', {"e": "f", "g": "h"}),
        simple_instance_list('rebooting', {"i": "j", "k": "l"}),
        simple_instance_list('available', {"g": "h", "i": "j"}),
        simple_instance_list('rebooting', {"i": "j", "k": "l"}),
        simple_instance_list('available', {}),
        simple_instance_list('available', {}),
    ]

These states are then used as returns from a mock object to ensure that the ``await_resource``
function waits through all of the states that would mean the RDS instance has not yet completed
configuration::

   rds_i.await_resource(rds_client_double, "some-instance", "available", mod_mock,
                        await_pending=1)
   assert(len(sleeper_double.mock_calls) > 5), "await_pending didn't wait enough"

By doing this we check that the ``await_resource`` function will keep waiting through
potentially unusual states that would be impossible to reliably trigger through the
integration tests but which happen unpredictably in reality.

Defining a module to work against multiple API versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This case is especially important for modules interacting with many different versions of
software; for example, package installation modules that might be expected to work with
many different operating system versions.

By using previously stored data from various versions of an API we can ensure that the
code is tested against the actual data which will be sent from that version of the system
even when the version is very obscure and unlikely to be available during testing.
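
As an illustrative sketch, where the response shapes and the ``get_status`` helper
are invented for this example, canned responses captured from different API versions
can drive the same code path:

```python
from unittest.mock import MagicMock

# Responses as they might have been captured from two hypothetical API versions.
V1_RESPONSE = {'version': '1.0', 'status': 'OK'}
V2_RESPONSE = {'api_version': '2.0', 'state': {'status': 'OK'}}

def get_status(client):
    """Hypothetical module code that must handle both response shapes."""
    data = client.describe()
    if 'api_version' in data:          # the v2 API nests the status
        return data['state']['status']
    return data['status']

def test_status_parsed_for_both_api_versions():
    for canned_response in (V1_RESPONSE, V2_RESPONSE):
        client_double = MagicMock()
        client_double.describe.return_value = canned_response
        assert get_status(client_double) == 'OK'
```

Both versions are exercised without either version of the real system being available.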

Ansible special cases for unit testing
======================================

There are a number of special cases for unit testing the environment of an Ansible module.
The most common are documented below, and suggestions for others can be found by looking
at the source code of the existing unit tests or asking on the Ansible IRC channel or mailing
lists.

Module argument processing
--------------------------

There are two problems with running the main function of a module:

* Since the module is supposed to accept arguments on ``STDIN`` it is a bit difficult to
  set up the arguments correctly so that the module will get them as parameters.
* All modules should finish by calling either :meth:`module.fail_json` or
  :meth:`module.exit_json`, but these won't work correctly in a testing environment.

Passing Arguments
-----------------

.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
   closed since the function below will be provided in a library file.

To pass arguments to a module correctly, use the ``set_module_args`` function, which accepts a
dictionary as its parameter. Module creation and argument processing are
handled through the :class:`AnsibleModule` object in the basic section of the utilities. Normally
this accepts input on ``STDIN``, which is not convenient for unit testing. When the special
variable ``basic._ANSIBLE_ARGS`` is set, its value is treated as if the input came on ``STDIN``
to the module. Simply call that function before setting up your module::

    from units.modules.utils import set_module_args

    def test_already_registered(self):
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })

Handling exit correctly
-----------------------

.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
   closed since the exit and failure functions below will be provided in a library file.

The :meth:`module.exit_json` function won't work properly in a testing environment since it
writes its return information to ``STDOUT`` upon exit, where it
is difficult to examine. This can be mitigated by replacing it (and :meth:`module.fail_json`) with
a function that raises an exception::

    def exit_json(*args, **kwargs):
        if 'changed' not in kwargs:
            kwargs['changed'] = False
        raise AnsibleExitJson(kwargs)

Now you can ensure that the first function called is the one you expected simply by
testing for the correct exception::

    def test_returned_value(self):
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })

        with self.assertRaises(AnsibleExitJson) as result:
            my_module.main()

The same technique can be used to replace :meth:`module.fail_json` (which is used for failure
returns from modules) and ``aws_module.fail_json_aws()`` (used in modules for Amazon
Web Services).
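
A matching ``fail_json`` replacement follows the same pattern (this mirrors the complete
example later in this document):

```python
class AnsibleFailJson(Exception):
    """Exception class to be raised by module.fail_json and caught by the test case"""
    pass

def fail_json(*args, **kwargs):
    """Patch over fail_json; package the failure data into an exception."""
    kwargs['failed'] = True
    raise AnsibleFailJson(kwargs)
```

After patching, a test can assert on the failure with
``self.assertRaises(AnsibleFailJson)`` exactly as in the exit case.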

Running the main function
-------------------------

If you do want to run the actual main function of a module you must import the module, set
the arguments as above, set up the appropriate exit exception and then run the module::

    # This test is based around pytest's features for individual test functions
    import pytest
    import ansible.modules.module.group.my_module as my_module

    def fake_exit_json(self, *args, **kwargs):
        raise AnsibleExitJson(kwargs)

    def test_main_function(monkeypatch):
        monkeypatch.setattr(my_module.AnsibleModule, "exit_json", fake_exit_json)
        set_module_args({
            'activationkey': 'key',
            'username': 'user',
            'password': 'pass',
        })
        with pytest.raises(AnsibleExitJson):
            my_module.main()

Handling calls to external executables
--------------------------------------

Modules must use :meth:`AnsibleModule.run_command` in order to execute an external command. This
method needs to be mocked.

Here is a simple mock of :meth:`AnsibleModule.run_command` (taken from :file:`test/units/modules/packaging/os/test_rhn_register.py`)::

        with patch.object(basic.AnsibleModule, 'run_command') as run_command:
            run_command.return_value = 0, '', ''  # successful execution, no output
            with self.assertRaises(AnsibleExitJson) as result:
                self.module.main()
            self.assertFalse(result.exception.args[0]['changed'])
        # Check that run_command has been called
        run_command.assert_called_once_with('/usr/bin/command args')
        self.assertEqual(run_command.call_count, 1)

A Complete Example
------------------

The following example is a complete skeleton that reuses the mocks explained above and adds a new
mock for :meth:`Ansible.get_bin_path`::

    import json

    from units.compat import unittest
    from units.compat.mock import patch
    from ansible.module_utils import basic
    from ansible.module_utils._text import to_bytes
    from ansible.modules.namespace import my_module


    def set_module_args(args):
        """prepare arguments so that they will be picked up during module creation"""
        args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
        basic._ANSIBLE_ARGS = to_bytes(args)


    class AnsibleExitJson(Exception):
        """Exception class to be raised by module.exit_json and caught by the test case"""
        pass


    class AnsibleFailJson(Exception):
        """Exception class to be raised by module.fail_json and caught by the test case"""
        pass


    def exit_json(*args, **kwargs):
        """function to patch over exit_json; package return data into an exception"""
        if 'changed' not in kwargs:
            kwargs['changed'] = False
        raise AnsibleExitJson(kwargs)


    def fail_json(*args, **kwargs):
        """function to patch over fail_json; package return data into an exception"""
        kwargs['failed'] = True
        raise AnsibleFailJson(kwargs)


    def get_bin_path(self, arg, required=False):
        """Mock AnsibleModule.get_bin_path"""
        if arg.endswith('my_command'):
            return '/usr/bin/my_command'
        else:
            if required:
                fail_json(msg='%r not found!' % arg)

    class TestMyModule(unittest.TestCase):

        def setUp(self):
            self.mock_module_helper = patch.multiple(basic.AnsibleModule,
                                                     exit_json=exit_json,
                                                     fail_json=fail_json,
                                                     get_bin_path=get_bin_path)
            self.mock_module_helper.start()
            self.addCleanup(self.mock_module_helper.stop)

        def test_module_fail_when_required_args_missing(self):
            with self.assertRaises(AnsibleFailJson):
                set_module_args({})
                my_module.main()

        def test_ensure_command_called(self):
            set_module_args({
                'param1': 10,
                'param2': 'test',
            })

            with patch.object(basic.AnsibleModule, 'run_command') as mock_run_command:
                stdout = 'configuration updated'
                stderr = ''
                rc = 0
                mock_run_command.return_value = rc, stdout, stderr  # successful execution

                with self.assertRaises(AnsibleExitJson) as result:
                    my_module.main()
                self.assertFalse(result.exception.args[0]['changed'])  # ensure result is not changed

            mock_run_command.assert_called_once_with('/usr/bin/my_command --value 10 --name test')

Restructuring modules to enable testing module set up and other processes
-------------------------------------------------------------------------

Often modules have a ``main()`` function which sets up the module and then performs other
actions. This can make it difficult to check argument processing. This can be made easier by
moving module configuration and initialization into a separate function. For example::

    argument_spec = dict(
        # module function variables
        state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
        apply_immediately=dict(type='bool', default=False),
        wait=dict(type='bool', default=False),
        wait_timeout=dict(type='int', default=600),
        allocated_storage=dict(type='int', aliases=['size']),
        db_instance_identifier=dict(aliases=["id"], required=True),
    )

    def setup_module_object():
        module = AnsibleAWSModule(
            argument_spec=argument_spec,
            required_if=required_if,
            mutually_exclusive=[['old_instance_id', 'source_db_instance_identifier',
                                 'db_snapshot_identifier']],
        )
        return module

    def main():
        module = setup_module_object()
        validate_parameters(module)
        conn = setup_client(module)
        return_dict = run_task(module, conn)
        module.exit_json(**return_dict)

This now makes it possible to run tests against the module initiation function::

    def test_rds_module_setup_fails_if_db_instance_identifier_parameter_missing():
        # db_instance_identifier parameter is missing
        set_module_args({
            'state': 'absent',
            'apply_immediately': 'True',
        })

        with self.assertRaises(AnsibleFailJson) as result:
            self.module.setup_module_object()

See also ``test/units/module_utils/aws/test_rds.py``

Note that the ``argument_spec`` dictionary is visible in a module variable. This has
advantages, both in allowing explicit testing of the arguments and in allowing the easy
creation of module objects for testing.
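
For instance, a test can assert on entries of the exposed ``argument_spec`` directly,
without constructing a module object at all (using a cut-down copy of the spec above):

```python
# Cut-down copy of the module-level argument_spec from the example above.
argument_spec = dict(
    state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
    wait_timeout=dict(type='int', default=600),
    db_instance_identifier=dict(aliases=["id"], required=True),
)

def test_wait_timeout_defaults_to_ten_minutes():
    assert argument_spec['wait_timeout']['default'] == 600

def test_db_instance_identifier_is_required():
    assert argument_spec['db_instance_identifier']['required'] is True
```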

The same restructuring technique can be valuable for testing other functionality, such as the part of the module which queries the object that the module configures.

Traps for maintaining Python 2 compatibility
============================================

With the version of the ``mock`` library used for Python 2.6, a number of the
assert functions are missing but calls to them will silently return as if successful. This means
that test cases should take great care *not* to use
functions marked as *new* in the Python 3 documentation, since the tests will likely always
succeed even if the code is broken when run on older versions of Python.

A helpful development approach is to ensure that all of the tests have been
run under Python 2.6 and that each assertion in the test cases has been checked to work by breaking
the code in Ansible to trigger that failure.

.. warning:: Maintain Python 2.6 compatibility

    Please remember that modules need to maintain compatibility with Python 2.6 so the unit tests for
    modules should also be compatible with Python 2.6.

.. seealso::

   :ref:`testing_units`
       Ansible unit tests documentation
   :ref:`testing_running_locally`
       Running tests locally including gathering and reporting coverage data
   :ref:`developing_modules_general`
       Get started developing a module
   `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
       The documentation of the unittest framework in python 3
   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
       The documentation of the earliest supported unittest framework - from Python 2.6
   `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
       The documentation of pytest - the framework actually used to run Ansible unit tests
   `Development Mailing List <https://groups.google.com/group/ansible-devel>`_
       Mailing list for development topics
   `Testing Your Code (from The Hitchhiker's Guide to Python!) <https://docs.python-guide.org/writing/tests/>`_
       General advice on testing Python code
   `Uncle Bob's many videos on YouTube <https://www.youtube.com/watch?v=QedpQjxBPMA&list=PLlu0CT-JnSasQzGrGzddSczJQQU7295D2>`_
       Unit testing is a part of various philosophies of software development, including
       Extreme Programming (XP) and Clean Coding. Uncle Bob talks through how to benefit from this
   `"Why Most Unit Testing is Waste" <https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf>`_
       An article warning against the costs of unit testing
   `'A Response to "Why Most Unit Testing is Waste"' <https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/>`_
       A response pointing out how to maintain the value of unit tests