pinocchio
=========

Pinocchio is a set of extensions to the nose_ unit testing framework
for Python.

You can get the most recent version from PyPI:

    https://pypi.python.org/pypi/pinocchio/

You can install it via ``easy_install`` or ``pip``::

    easy_install pinocchio
    pip install pinocchio

Pinocchio requires nose 0.9a1 or later and is compatible with both
Python 2 and 3.

.. contents::

Extensions
==========

stopwatch -- selecting tests based on execution time
----------------------------------------------------
Sometimes your unit tests just seem to take *forever*.  Well, now
you can get rid of the slow ones automatically!

The pinocchio.stopwatch extension module lets you time the unit tests
being run, and -- once times have been recorded -- lets you select
only those that run faster than a given amount of time.  As a bonus,
the test names and run times are stored in a simple format -- a
pickled dictionary -- so you can target specific tests for speedup,
too.
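
Because the timing data is just a pickled dictionary, you can inspect
it directly.  Here is a minimal sketch, assuming the default filename
and a plain mapping of test name to seconds; the sample data is
fabricated for illustration:

```python
import pickle

# Fabricated sample data, just to illustrate the (assumed) file format:
# a pickled dict mapping test name -> run time in seconds.
sample = {
    "test_examples.test_fast": 0.01,
    "test_examples.test_slow": 2.50,
}
with open(".nose-stopwatch-times", "wb") as f:
    pickle.dump(sample, f)

# Load the timings (the default --stopwatch-file name) and list the
# slowest tests first -- good candidates for speedup.
with open(".nose-stopwatch-times", "rb") as f:
    times = pickle.load(f)

slowest = sorted(times.items(), key=lambda kv: kv[1], reverse=True)
for name, seconds in slowest:
    print("{0:8.3f}s  {1}".format(seconds, name))
```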

Use cases
~~~~~~~~~

There's really only one use case here: your tests take too long to
run, so you and your developers aren't running them very frequently.
The stopwatch module lets you pick out the fast ones to be run via
the command line; you can always run the slow ones in your continuous
build system, right?

Options
~~~~~~~

``--with-stopwatch`` enables test timing.

``--stopwatch-file`` changes the filename used to save the pickled test
times from the default ``.nose-stopwatch-times`` to the specified file.

``--faster-than`` sets an upper time limit (in seconds); only tests
that previously ran faster than this limit will be selected.

Examples
~~~~~~~~

See ``examples/test_stopwatch.py`` for some examples; use ::

   nosetests -w examples/ --with-stopwatch --faster-than 1 test_stopwatch.py
   nosetests -w examples/ --with-stopwatch --faster-than 1 test_stopwatch.py

to run a subset of the tests.  Note that you need to run the command
twice -- once to record the times (all tests will run, regardless of
the ``--faster-than`` parameter) and a second time to select only the
"fast" tests.

decorator -- adding attributes to tests
---------------------------------------

The attrib extension module for nose is a great way to select subsets
of tests based on attributes you've given the test functions, classes,
or methods.  But what if you don't want to modify the source code
to add the attributes?

The pinocchio.decorator extension module lets you provide the names of
functions, classes, and methods to which to add tags.  For example, ::

   TestModule.test_function: a
   TestModule.TestClass: b
   TestModule.TestClass.test_method: c

would set attributes on a function, a class, and a method.  Then ::

   nosetests -a a

would run only the function, ::

   nosetests -a b

would run all methods on the class, and ::

   nosetests -a c

would run only the method.
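
For illustration, a hypothetical ``TestModule.py`` matching the tag
file above could look like the following (the test bodies are made up);
note that no attributes appear in the source -- the decorator plugin
attaches them from the tag file at collection time:

```python
# TestModule.py -- a hypothetical module matching the tag file above.
# No attributes appear in the source; the decorator plugin attaches
# them from the tag file at collection time.

def test_function():            # tagged 'a' via TestModule.test_function: a
    assert 1 + 1 == 2

class TestClass:                # tagged 'b' via TestModule.TestClass: b
    def test_method(self):      # tagged 'c' via TestModule.TestClass.test_method: c
        assert "nose".upper() == "NOSE"
```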

Use cases
~~~~~~~~~

There are a couple of scenarios where this can come in handy:

 * You're working on a bunch of tests that are failing, and you only
   want to execute the failing tests.  If modifying the failing test
   code itself would be more work than simply listing the tests in a
   file, use the decorator extension.

 * You have a 'private' set of unit tests that bear on the code you're
   working on.  You want to iterate quickly, just running those unit
   tests.  Again, if modifying the test code itself is more work than
   simply listing the relevant tests in a file, use the decorator extension.

 * You have a few unit tests that are failing, and you *don't* want to
   execute them (that is, the reverse of the first scenario).

Options
~~~~~~~

``--decorator-file`` specifies the file containing the tags to use.

Examples
~~~~~~~~

See ``examples/test_decorator.py`` for some examples; use
``examples/test_decorator.attribs`` as the decorator file.  For
example, try the following commands::

   nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a one
   nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a two
   nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a three

figleafsections -- find out which tests execute which parts of your code
------------------------------------------------------------------------

(You'll need to install `figleaf <http://darcs.idyll.org/~t/projects/figleaf/doc/>`__
to use this plugin; installing it gives you the ``figleaf`` package and
the ``annotate-sections`` script.)

This plugin lets you record code coverage per unit test, and then
annotate your Python source code with which unit tests are running
which lines of code.  It's a useful way to figure out which nose tests
are exercising what parts of your program.

See http://ivory.idyll.org/blog/feb-07/figleaf-goodness.html for some
detailed examples.

To try it out, do this::

   nosetests -w examples/ --with-figleafsections examples/test_sections.py
   annotate-sections examples/test_sections.py

The annotated output will be placed in ``examples/test_sections.py.sections``.

outputsave -- save your stdout into files
-----------------------------------------

This plugin records the stdout from each test in a separate file,
with a prefix indicating whether or not the test succeeded.

Use cases
~~~~~~~~~

The main use case is when you have *many* failing tests and want to
look at their output without paging through the nose error output
linearly.
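
Once the files are on disk, a short script can surface just the
failures.  A sketch, assuming a save directory named ``output`` and the
``success-``/``fail-``/``error-`` prefixes described under Options
below; the sample files here are fabricated for illustration:

```python
import glob
import os

# Fabricate a sample saved-output layout, as if nose had been run with
# --save-directory=output (test names here are illustrative only).
os.makedirs("output", exist_ok=True)
for fname, text in [("success-test_ok", "all good\n"),
                    ("fail-test_bad", "AssertionError: 1 != 2\n")]:
    with open(os.path.join("output", fname), "w") as f:
        f.write(text)

# Collect only the failing/erroring tests and dump their captured stdout.
problems = sorted(glob.glob("output/fail-*") + glob.glob("output/error-*"))
for path in problems:
    print("==", os.path.basename(path), "==")
    with open(path) as f:
        print(f.read())
```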

Options
~~~~~~~

``--with-outputsave`` enables the plugin.  Output from successful tests
is placed in ``success-<testname>``, output from failed tests in
``fail-<testname>``, and output from errors in ``error-<testname>``.

``--omit-success`` skips saving output from successful tests, i.e. only
'fail-' and 'error-' output files are created.

``--save-directory`` places all saved output in the given directory,
creating it if it does not exist.

Examples
~~~~~~~~

Try::

   nosetests -w examples/ --with-outputsave --save-directory=output examples/test_outputsave.py

Then look at the ``output`` directory.

spec -- generate test descriptions from test class/method names
---------------------------------------------------------------

spec lets you generate a "specification" similar to testdox_.  The
spec plugin can generate simple documentation directly from the class
and method names of test cases.  For example, a test case like::

  class TestFoobar:
      def test_is_a_singleton(self):
          pass
      def test_can_be_automatically_documented(self):
          pass

will generate the following specification during the test run::

  Foobar
  - is a singleton
  - can be automatically documented

Test functions put directly into a module get a context based on the
name of the containing module.  For example, if you define the
functions test_are_marked_as_deprecated() and
test_doesnt_work_with_sets() in a module test_containers.py, you'll
get the following specs::

  Containers
  - are marked as deprecated
  - doesn't work with sets
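
The naming transform is easy to approximate.  Below is a rough sketch
of the convention -- an illustration, not pinocchio's actual
implementation; in particular, the contraction table is an assumption
about how apostrophes get restored:

```python
# A rough sketch of the spec naming convention -- an illustration, not
# pinocchio's actual implementation.  The contraction table is an
# assumption about how apostrophes are restored.
CONTRACTIONS = {"doesnt": "doesn't", "isnt": "isn't", "dont": "don't"}

def context_name(name):
    """'TestFoobar' -> 'Foobar'; module 'test_containers' -> 'Containers'."""
    if name.startswith("Test"):
        return name[len("Test"):]
    if name.startswith("test_"):
        return name[len("test_"):].capitalize()
    return name

def spec_line(test_name):
    """'test_is_a_singleton' -> '- is a singleton'."""
    words = test_name[len("test_"):].split("_")
    return "- " + " ".join(CONTRACTIONS.get(w, w) for w in words)

print(context_name("TestFoobar"))
print(spec_line("test_is_a_singleton"))
print(spec_line("test_can_be_automatically_documented"))
```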

Use cases
~~~~~~~~~

If you follow a good naming convention for your tests, you get a free,
up-to-date specification of your application -- it will be as accurate
as your tests are.

Options
~~~~~~~

``--with-spec`` enables the plugin and automatically sets nose's
verbosity to "detailed output".  During the test run, all your test
descriptions are shown as a kind of specification: each test class
sets up a context, and each test method states a single specification.

``--spec-color`` enables colored output.  Successful tests are marked
in green, failed/error cases in red, and skipped and deprecated test
cases in yellow.  You need an ANSI terminal to use this.

``--spec-doctests`` enables experimental support for doctests.

``--spec-file=SPEC_FILE`` writes the specification to a separate file
instead of the default nose stream.  When this option is used, the
nose reporter is not replaced, so error details still go to stderr.

Examples
~~~~~~~~

Try::

   nosetests --with-spec --spec-color examples/test_spec.py

(Yes, you should see an error.)

Look at the ``examples/test_spec.py`` source code and the tests inside
the ``tests/spec_test_cases/`` directory to see how test cases are
mapped into specifications.

License
=======

pinocchio is available under the MIT license.

Author Information
==================

The author of the stopwatch, decorator, figleafsections, and
outputsave extensions is Titus Brown.  You can contact him at
titus@idyll.org, or check out his main site at
http://ivory.idyll.org/.

The author of the spec plugin is Michal Kwiatkowski.  His homepage is
at http://joker.linuxstuff.pl/ and his mail address is
michal@trivas.pl.

.. _nose: https://nose.readthedocs.org/en/latest/
.. _testdox: http://agiledox.sourceforge.net/
272