/****************************************************************************
**
** Copyright (C) 2019 The Qt Company Ltd.
** Copyright (C) 2016 Intel Corporation.
** Contact: https://www.qt.io/licensing/
**
** This file is part of the documentation of the Qt Toolkit.
**
** $QT_BEGIN_LICENSE:FDL$
** Commercial License Usage
** Licensees holding valid commercial Qt licenses may use this file in
** accordance with the commercial license agreement provided with the
** Software or, alternatively, in accordance with the terms contained in
** a written agreement between you and The Qt Company. For licensing terms
** and conditions see https://www.qt.io/terms-conditions. For further
** information use the contact form at https://www.qt.io/contact-us.
**
** GNU Free Documentation License Usage
** Alternatively, this file may be used under the terms of the GNU Free
** Documentation License version 1.3 as published by the Free Software
** Foundation and appearing in the file included in the packaging of
** this file. Please review the following information to ensure
** the GNU Free Documentation License version 1.3 requirements
** will be met: https://www.gnu.org/licenses/fdl-1.3.html.
** $QT_END_LICENSE$
**
****************************************************************************/

/*!
    \page qtest-overview.html
    \title Qt Test Overview
    \brief Overview of the Qt unit testing framework.

    \ingroup frameworks-technologies
    \ingroup qt-basic-concepts

    \keyword qtestlib

    Qt Test is a framework for unit testing Qt-based applications and
    libraries. Qt Test provides all the functionality commonly found in unit
    testing frameworks as well as extensions for testing graphical user
    interfaces.

    Qt Test is designed to ease the writing of unit tests for Qt-based
    applications and libraries:

    \table
    \header \li Feature \li Details
    \row
        \li \b Lightweight
        \li Qt Test consists of about 6000 lines of code and 60
           exported symbols.
    \row
        \li \b Self-contained
        \li Qt Test requires only a few symbols from the Qt Core module
           for non-GUI testing.
    \row
        \li \b {Rapid testing}
        \li Qt Test needs no special test-runners; no special
           registration for tests.
    \row
        \li \b {Data-driven testing}
        \li A test can be executed multiple times with different test data.
    \row
        \li \b {Basic GUI testing}
        \li Qt Test offers functionality for mouse and keyboard simulation.
    \row
        \li \b {Benchmarking}
        \li Qt Test supports benchmarking and provides several measurement back-ends.
    \row
         \li \b {IDE friendly}
         \li Qt Test outputs messages that can be interpreted by Qt Creator, Visual
            Studio, and KDevelop.
    \row
         \li \b Thread-safety
         \li The error reporting is thread safe and atomic.
    \row
         \li \b Type-safety
         \li Extensive use of templates prevents errors introduced by
            implicit type casting.
    \row
         \li \b {Easily extendable}
         \li Custom types can easily be added to the test data and test output.
    \endtable

    You can use a Qt Creator wizard to create a project that contains Qt tests
    and build and run them directly from Qt Creator. For more information, see
    \l {Qt Creator: Running Autotests}{Running Autotests}.

    \section1 Creating a Test

    To create a test, subclass QObject and add one or more private slots to it. Each
    private slot is a test function in your test. QTest::qExec() can be used to execute
    all test functions in the test object.
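
    For instance, a minimal sketch of such a test might look like this (the
    class name, the tested expression, and the file name in the moc include
    are illustrative only):

    \code
    #include <QtTest>
    #include <QDate>

    class TestDate : public QObject
    {
        Q_OBJECT

    private slots:
        void leapYear();   // each private slot is executed as a test function
    };

    void TestDate::leapYear()
    {
        QVERIFY(QDate(2020, 2, 29).isValid());
    }

    int main(int argc, char *argv[])
    {
        TestDate testObject;
        return QTest::qExec(&testObject, argc, argv);   // runs all test functions
    }

    #include "main.moc"   // assuming this file is called main.cpp
    \endcode

    In practice, the QTEST_MAIN() family of macros used later in this overview
    and in the tutorial generates an equivalent \c main() function for you.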

    In addition, you can define the following private slots that are \e not
    treated as test functions. When present, they will be executed by the
    testing framework and can be used to initialize and clean up either the
    entire test or the current test function.

    \list
    \li \c{initTestCase()} will be called before the first test function is executed.
    \li \c{initTestCase_data()} will be called to create a global test data table.
    \li \c{cleanupTestCase()} will be called after the last test function was executed.
    \li \c{init()} will be called before each test function is executed.
    \li \c{cleanup()} will be called after every test function.
    \endlist

    Use \c initTestCase() for preparing the test. Every test should leave the
    system in a usable state, so it can be run repeatedly. Cleanup operations
    should be handled in \c cleanupTestCase(), so they get run even if the test
    fails.

    Use \c init() for preparing a test function. Every test function should
    leave the system in a usable state, so it can be run repeatedly. Cleanup
    operations should be handled in \c cleanup(), so they get run even if the
    test function fails and exits early.

    Alternatively, you can use RAII (resource acquisition is initialization),
    with cleanup operations called in destructors, to ensure they happen when
    the test function returns and the object moves out of scope.
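
    For example, a minimal sketch of this approach, assuming a hypothetical
    \c FileIndexer class under test and that \c QTemporaryDir is included:

    \code
    void TestFileIndexer::indexesEmptyDirectory()
    {
        // QTemporaryDir removes the directory in its destructor, so the
        // cleanup happens even if a check below fails and the test
        // function returns early.
        QTemporaryDir workDir;
        QVERIFY(workDir.isValid());

        FileIndexer indexer(workDir.path());   // hypothetical class under test
        QCOMPARE(indexer.entryCount(), 0);
    }
    \endcode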

    If \c{initTestCase()} fails, no test function will be executed. If \c{init()} fails,
    the following test function will not be executed; the test will then proceed to the
    next test function.

    Example:
    \snippet code/doc_src_qtestlib.cpp 0

    Finally, if the test class has a static public \c{void initMain()} method,
    it is called by the QTEST_MAIN macros before the QApplication object
    is instantiated. For example, this allows for setting application
    attributes like Qt::AA_DisableHighDpiScaling. This was added in Qt 5.14.
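
    For example, a sketch of such a method (the chosen attribute is just an
    illustration):

    \code
    class TestRendering : public QObject
    {
        Q_OBJECT

    public:
        static void initMain()
        {
            // Runs before QTEST_MAIN instantiates the application object.
            QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
        }

    private slots:
        void compareImages();
    };
    \endcode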

    For more examples, refer to the \l{Qt Test Tutorial}.

    \if !defined(qtforpython)
    \section1 Building a Test

    You can build an executable that contains one test class that typically
    tests one class of production code. However, you would usually want to
    test several classes in a project by running a single command.

    See \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test} for a
    step-by-step explanation.

    \section2 Building with CMake and CTest

    You can use \l {Building with CMake and CTest} to create a test.
    \l{https://cmake.org/cmake/help/latest/manual/ctest.1.html}{CTest} enables
    you to include or exclude tests based on a regular expression that is
    matched against the test name. You can further apply the \c LABELS property
    to a test and CTest can then include or exclude tests based on those labels.
    All labeled targets will be run when the \c test target is called on the
    command line.

    There are several other advantages to using CMake. For example, the result
    of a test run can be published on a web server using CDash with virtually
    no effort.

    CTest works with many different unit test frameworks, and works out of the
    box with QTest.

    The following is an example of a CMakeLists.txt file that specifies the
    project name and the language used (here, \e mytest and C++), the Qt
    modules required for building the test (Qt5Test), and the files that are
    included in the test (\e tst_mytest.cpp).

    \quotefile code/doc_src_cmakelists.txt

    For more information about the options you have, see \l {Build with CMake}.

    \section2 Building with qmake

    If you are using \c qmake as your build tool, just add the
    following to your project file:

    \snippet code/doc_src_qtestlib.pro 1

    If you would like to run the test via \c{make check}, add the
    additional line:

    \snippet code/doc_src_qtestlib.pro 2

    To prevent the test from being installed to your target, add the
    additional line:

    \snippet code/doc_src_qtestlib.pro 3

    See the \l{Building a Testcase}{qmake manual} for
    more information about \c{make check}.

    \section2 Building with Other Tools

    If you are using other build tools, make sure that you add the location
    of the Qt Test header files to your include path (usually \c{include/QtTest}
    under your Qt installation directory). If you are using a release build
    of Qt, link your test to the \c QtTest library. For debug builds, use
    \c{QtTest_debug}.

    \endif

    \section1 Qt Test Command Line Arguments

    \section2 Syntax

    The syntax to execute an autotest takes the following simple form:

    \snippet code/doc_src_qtestlib.qdoc 2

    Substitute \c testname with the name of your executable. \c
    testfunctions can contain names of test functions to be
    executed. If no \c testfunctions are passed, all tests are run. If you
    append the name of an entry in \c testdata, the test function will be
    run only with that test data.

    For example:

    \snippet code/doc_src_qtestlib.qdoc 3

    Runs the test function called \c toUpper with all available test data.

    \snippet code/doc_src_qtestlib.qdoc 4

    Runs the \c toUpper test function with all available test data,
    and the \c toInt test function with the test data called \c
    zero (if the specified test data doesn't exist, the associated test
    will fail).

    \snippet code/doc_src_qtestlib.qdoc 5

    Runs the \c testMyWidget test function, outputs every signal
    emission and waits 500 milliseconds after each simulated
    mouse/keyboard event.

    \section2 Options

    \section3 Logging Options

    The following command line options determine how test results are reported:

    \list
    \li \c -o \e{filename,format} \br
    Writes output to the specified file, in the specified format (one of
    \c txt, \c xml, \c lightxml, \c junitxml or \c tap). The special filename \c -
    may be used to log to standard output.
    \li \c -o \e filename \br
    Writes output to the specified file.
    \li \c -txt \br
    Outputs results in plain text.
    \li \c -xml \br
    Outputs results as an XML document.
    \li \c -lightxml \br
    Outputs results as a stream of XML tags.
    \li \c -junitxml \br
    Outputs results as a JUnit XML document.
    \li \c -csv \br
    Outputs results as comma-separated values (CSV). This mode is only suitable for
    benchmarks, since it suppresses normal pass/fail messages.
    \li \c -teamcity \br
    Outputs results in TeamCity format.
    \li \c -tap \br
    Outputs results in Test Anything Protocol (TAP) format.
    \endlist

    The first version of the \c -o option may be repeated in order to log
    test results in multiple formats, but no more than one instance of this
    option can log test results to standard output.

    If the first version of the \c -o option is used, neither the second version
    of the \c -o option nor the \c -txt, \c -xml, \c -lightxml, \c -teamcity,
    \c -junitxml or \c -tap options should be used.

    If neither version of the \c -o option is used, test results will be logged to
    standard output. If no format option is used, test results will be logged in
    plain text.

    \section3 Test Log Detail Options

    The following command line options control how much detail is reported
    in test logs:

    \list
    \li \c -silent \br
    Silent output; only shows fatal errors, test failures and minimal status
    messages.
    \li \c -v1 \br
    Verbose output; shows when each test function is entered.
    (This option only affects plain text output.)
    \li \c -v2 \br
    Extended verbose output; shows each \l QCOMPARE() and \l QVERIFY().
    (This option affects all output formats and implies \c -v1 for plain text output.)
    \li \c -vs \br
    Shows all signals that get emitted and the slot invocations resulting from
    those signals.
    (This option affects all output formats.)
    \endlist

    \section3 Testing Options

    The following command-line options influence how tests are run:

    \list
    \li \c -functions \br
    Outputs all test functions available in the test, then quits.
    \li \c -datatags \br
    Outputs all data tags available in the test.
    A global data tag is preceded by ' __global__ '.
    \li \c -eventdelay \e ms \br
    If no delay is specified for keyboard or mouse simulation
    (\l QTest::keyClick(),
    \l QTest::mouseClick() etc.), the value from this parameter
    (in milliseconds) is substituted.
    \li \c -keydelay \e ms \br
    Like -eventdelay, but only influences keyboard simulation and not mouse
    simulation.
    \li \c -mousedelay \e ms \br
    Like -eventdelay, but only influences mouse simulation and not keyboard
    simulation.
    \li \c -maxwarnings \e number \br
    Sets the maximum number of warnings to output. Use 0 for unlimited; the
    default is 2000.
    \li \c -nocrashhandler \br
    Disables the crash handler on Unix platforms.
    On Windows, it re-enables the Windows Error Reporting dialog, which is
    turned off by default. This is useful for debugging crashes.

    \li \c -platform \e name \br
    This command line argument applies to all Qt applications, but might be
    especially useful in the context of auto-testing. By using the "offscreen"
    platform plugin (-platform offscreen) it's possible to have tests that use
    QWidget or QWindow run without showing anything on the screen. Currently
    the offscreen platform plugin is only fully supported on X11.
    \endlist

    \section3 Benchmarking Options

    The following command line options control benchmark testing:

    \list
    \li \c -callgrind \br
    Uses Callgrind to time benchmarks (Linux only).
    \li \c -tickcounter \br
    Uses CPU tick counters to time benchmarks.
    \li \c -eventcounter \br
    Counts events received during benchmarks.
    \li \c -minimumvalue \e n \br
    Sets the minimum acceptable measurement value.
    \li \c -minimumtotal \e n \br
    Sets the minimum acceptable total for repeated executions of a test function.
    \li \c -iterations \e n \br
    Sets the number of accumulation iterations.
    \li \c -median \e n \br
    Sets the number of median iterations.
    \li \c -vb \br
    Outputs verbose benchmarking information.
    \endlist

    \section3 Miscellaneous Options

    \list
    \li \c -help \br
    Outputs the possible command line arguments and gives some useful help.
    \endlist

    \section1 Creating a Benchmark

    To create a benchmark, follow the instructions for creating a test and then add a
    \l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test function that
    you want to benchmark. In the following code snippet, the macro is used:

    \snippet code/doc_src_qtestlib.cpp 12

    A test function that measures performance should contain either a single
    \c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
    occurrences make no sense, because only one performance result can be
    reported per test function, or per data tag in a data-driven setup.
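
    If you measure the value yourself, a sketch using \c setBenchmarkResult()
    could look like this (the parsing function is hypothetical):

    \code
    void TestParser::parseLargeFile()
    {
        QElapsedTimer timer;
        timer.start();
        parseFile(QStringLiteral("large.xml"));   // hypothetical code under test
        QTest::setBenchmarkResult(timer.elapsed(), QTest::WalltimeMilliseconds);
    }
    \endcode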

    Avoid changing the test code that forms (or influences) the body of a
    \c QBENCHMARK macro, or the test code that computes the value passed to
    \c setBenchmarkResult(). Differences in successive performance results
    should ideally be caused only by changes to the product you are testing.
    Changes to the test code can potentially result in a misleading report of
    a change in performance. If you do need to change the test code, make
    that clear in the commit message.

    In a performance test function, the \c QBENCHMARK or \c setBenchmarkResult()
    should be followed by a verification step using \l QCOMPARE(), \l QVERIFY(),
    and so on. You can then flag a performance result as \e invalid if a code
    path other than the intended one was measured. A performance analysis tool
    can use this information to filter out invalid results.
    For example, an unexpected error condition will typically cause the program
    to bail out prematurely from the normal program execution, and thus falsely
    show a dramatic performance increase.
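
    A sketch of this pattern, with hypothetical \c sampleInput(), \c decode(),
    and \c expectedOutput() helpers, might look like this:

    \code
    void TestCodec::decodeBenchmark()
    {
        const QByteArray input = sampleInput();   // hypothetical helper

        QString decoded;
        QBENCHMARK {
            decoded = decode(input);              // hypothetical code under test
        }

        // If decoding took an error path and produced the wrong output,
        // this comparison fails, so analysis tools can discard the
        // measurement instead of reporting a false speed-up.
        QCOMPARE(decoded, expectedOutput());
    }
    \endcode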

    \section2 Selecting the Measurement Back-end

    The code inside the QBENCHMARK macro will be measured, and possibly also repeated
    several times in order to get an accurate measurement. This depends on the selected
    measurement back-end. Several back-ends are available. They can be selected on the
    command line:

    \target testlib-benchmarking-measurement

    \table
    \header \li Name
         \li Command-line Argument
         \li Availability
    \row \li Walltime
         \li (default)
         \li All platforms
    \row \li CPU tick counter
         \li -tickcounter
         \li Windows, \macos, Linux, many UNIX-like systems.
    \row \li Event Counter
         \li -eventcounter
         \li All platforms
    \row \li Valgrind Callgrind
         \li -callgrind
         \li Linux (if installed)
    \row \li Linux Perf
         \li -perf
         \li Linux
    \endtable

    In short, walltime is always available but requires many repetitions to
    get a useful result.
    Tick counters are usually available and can provide
    results with fewer repetitions, but can be susceptible to CPU frequency
    scaling issues.
    Valgrind provides exact results, but does not take
    I/O waits into account, and is only available on a limited number of
    platforms.
    Event counting is available on all platforms and it provides the number of
    events that the event loop receives before sending them to their
    corresponding targets (this might include non-Qt events).

    The Linux Performance Monitoring solution is available only on Linux and
    provides many different counters, which can be selected by passing an
    additional option \c {-perfcounter countername}, such as \c {-perfcounter
    cache-misses}, \c {-perfcounter branch-misses}, or \c {-perfcounter
    l1d-load-misses}. The default counter is \c {cpu-cycles}. The full list of
    counters can be obtained by running any benchmark executable with the
    option \c -perfcounterlist.

    \note
    \list
    \li Using the performance counter may require enabling access for non-privileged
        applications.
    \li Devices that do not support high-resolution timers default to
        one-millisecond granularity.
    \endlist

    See \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark} in the Qt Test
    Tutorial for more benchmarking examples.

    \section1 Using Global Test Data

    You can define \c{initTestCase_data()} to set up a global test data table.
    Each test is run once for each row in the global test data table. When the
    test function itself \l{Chapter 2: Data Driven Testing}{is data-driven},
    it is run for each local data row, for each global data row. So, if there
    are \c g rows in the global data table and \c d rows in the test's own
    data table, the number of runs of this test is \c g times \c d.

    Global data is fetched from the table using the \l QFETCH_GLOBAL() macro.
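
    For instance, a sketch of a global data table that selects a transport
    (the class and data tags are illustrative only):

    \code
    void TestDownloader::initTestCase_data()
    {
        QTest::addColumn<bool>("useSsl");
        QTest::newRow("http") << false;
        QTest::newRow("https") << true;
    }

    void TestDownloader::download()
    {
        QFETCH_GLOBAL(bool, useSsl);
        // ... exercise the download over HTTP or HTTPS, depending on useSsl ...
    }
    \endcode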

    The following are typical use cases for global test data:

    \list
        \li Selecting among the available database backends in QSql tests to run
            every test against every database.
        \li Doing all networking tests with and without SSL (HTTP versus HTTPS)
            and proxying.
        \li Testing a timer with a high precision clock and with a coarse one.
        \li Selecting whether a parser shall read from a QByteArray or from a
            QIODevice.
    \endlist

    For example, to test each number provided by \c {roundTripInt_data()} with
    each locale provided by \c {initTestCase_data()}:

    \snippet code/src_qtestlib_qtestcase_snippet.cpp 31
*/

/*!
    \page qtest-tutorial.html
    \brief A short introduction to testing with Qt Test.
    \nextpage {Chapter 1: Writing a Unit Test}{Chapter 1}
    \ingroup best-practices

    \title Qt Test Tutorial

    This tutorial gives a short introduction to how to use some of the
    features of the Qt Test framework. It is divided into six
    chapters:

    \list 1
    \li \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test}
    \li \l {Chapter 2: Data Driven Testing}{Data Driven Testing}
    \li \l {Chapter 3: Simulating GUI Events}{Simulating GUI Events}
    \li \l {Chapter 4: Replaying GUI Events}{Replaying GUI Events}
    \li \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark}
    \li \l {Chapter 6: Skipping Tests with QSKIP}{Skipping Tests}
    \endlist

*/


/*!
    \example tutorial1

    \nextpage {Chapter 2: Data Driven Testing}{Chapter 2}

    \title Chapter 1: Writing a Unit Test
    \brief How to write a unit test.

    In this first chapter we will see how to write a simple unit test
    for a class, and how to execute it.

    \section1 Writing a Test

    Let's assume you want to test the behavior of our QString class.
    First, you need a class that contains your test functions. This class
    has to inherit from QObject:

    \snippet tutorial1/testqstring.cpp 0

    \note You need to include the QTest header and declare the test functions as
    private slots so the test framework finds and executes them.

    Then you need to implement the test function itself. The
    implementation could look like this:

    \snippet code/doc_src_qtestlib.cpp 8

    The \l QVERIFY() macro evaluates the expression passed as its
    argument. If the expression evaluates to true, the execution of
    the test function continues. Otherwise, a message describing the
    failure is appended to the test log, and the test function stops
    executing.

    If you want more verbose output in the test log, use the
    \l QCOMPARE() macro instead:

    \snippet tutorial1/testqstring.cpp 1

    If the strings are not equal, the contents of both strings are
    appended to the test log, making it immediately visible why the
    comparison failed.

    Finally, to make our test case a stand-alone executable, the
    following two lines are needed:

    \snippet tutorial1/testqstring.cpp 2

    The \l QTEST_MAIN() macro expands to a simple \c main()
    method that runs all the test functions. Note that if both the
    declaration and the implementation of our test class are in a \c
    .cpp file, we also need to include the generated moc file to make
    Qt's introspection work.

    \section1 Executing a Test

    Now that we have finished writing our test, we want to execute
    it. Assuming that our test was saved as \c testqstring.cpp in an
    empty directory, we build the test using qmake to create a project
    and generate a makefile.

    \snippet code/doc_src_qtestlib.qdoc 9

    \note If you're using Windows, replace \c make with \c
    nmake or whatever build tool you use.

    Running the resulting executable should give you the following
    output:

    \snippet code/doc_src_qtestlib.qdoc 10

    Congratulations! You just wrote and executed your first unit test
    using the Qt Test framework.
*/

/*!
    \example tutorial2

    \previouspage {Chapter 1: Writing a Unit Test}{Chapter 1}
    \nextpage {Chapter 3: Simulating GUI Events}{Chapter 3}

    \title Chapter 2: Data Driven Testing
    \brief How to create data driven tests.

    In this chapter we will demonstrate how to execute a test
    multiple times with different test data.

    So far, we have hard coded the data we wanted to test into our
    test function. If we add more test data, the function might look like
    this:

    \snippet code/doc_src_qtestlib.cpp 11

    To prevent the function from becoming cluttered with repetitive
    code, Qt Test supports adding test data to a test function. All
    we need is to add another private slot to our test class:

    \snippet tutorial2/testqstring.cpp 0

    \section1 Writing the Data Function

    A test function's associated data function carries the same name,
    appended by \c{_data}. Our data function looks like this:

    \snippet tutorial2/testqstring.cpp 1

    First, we define the two elements of our test table using the \l
    QTest::addColumn() function: a test string, and the
    expected result of applying the QString::toUpper() function to
    that string.

    Then we add some data to the table using the \l
    QTest::newRow() function. Each set of data will become a
    separate row in the test table.

    \l QTest::newRow() takes one argument: a name that will be associated
    with the data set and used in the test log to identify the data set.
    Then we stream the data set into the new table row. First an arbitrary
    string, and then the expected result of applying the
    QString::toUpper() function to that string.

    You can think of the test data as a two-dimensional table. In
    our case, it has two columns called \c string and \c result and
    three rows. In addition, a name and an index are associated
    with each row:

    \table
    \header
        \li index
        \li name
        \li string
        \li result
    \row
        \li 0
        \li all lower
        \li "hello"
        \li "HELLO"
    \row
        \li 1
        \li mixed
        \li "Hello"
        \li "HELLO"
    \row
        \li 2
        \li all upper
        \li "HELLO"
        \li "HELLO"
    \endtable

    When data is streamed into the row, each datum is asserted to match
    the type of the column whose value it supplies. If any assertion fails,
    the test is aborted.

    \section1 Rewriting the Test Function

    Our test function can now be rewritten:

    \snippet tutorial2/testqstring.cpp 2

    The TestQString::toUpper() function will be executed three times,
    once for each entry in the test table that we created in the
    associated TestQString::toUpper_data() function.

    First, we fetch the two elements of the data set using the \l
    QFETCH() macro. \l QFETCH() takes two arguments: the data type of
    the element and the element name. Then we perform the test using
    the \l QCOMPARE() macro.

    This approach makes it very easy to add new data to the test
    without modifying the test itself.

    And again, to make our test case a stand-alone executable,
    the following two lines are needed:

    \snippet tutorial2/testqstring.cpp 3

    As before, the QTEST_MAIN() macro expands to a simple main()
    method that runs all the test functions, and since both the
    declaration and the implementation of our test class are in a .cpp
    file, we also need to include the generated moc file to make Qt's
    introspection work.
*/

/*!
    \example tutorial3

    \previouspage {Chapter 2: Data Driven Testing}{Chapter 2}
    \nextpage {Chapter 4: Replaying GUI Events}{Chapter 4}

    \title Chapter 3: Simulating GUI Events
    \brief How to simulate GUI events.

    Qt Test features some mechanisms to test graphical user
    interfaces. Instead of simulating native window system events,
    Qt Test sends internal Qt events. That means there are no
    side-effects on the machine the tests are running on.

    In this chapter we will see how to write a simple GUI test.

    \section1 Writing a GUI Test

    This time, let's assume you want to test the behavior of our
    QLineEdit class. As before, you will need a class that contains
    your test function:

    \snippet tutorial3/testgui.cpp 0

    The only difference is that you need to include the Qt GUI class
    definitions in addition to the QTest namespace.

    \snippet tutorial3/testgui.cpp 1

    In the implementation of the test function we first create a
    QLineEdit. Then we simulate writing "hello world" in the line edit
    using the \l QTest::keyClicks() function.

    \note The widget must also be shown in order to correctly test keyboard
    shortcuts.

    QTest::keyClicks() simulates clicking a sequence of keys on a
    widget. Optionally, a keyboard modifier can be specified, as well
    as a delay (in milliseconds) after each key click. In
    a similar way, you can use the QTest::keyClick(),
    QTest::keyPress(), QTest::keyRelease(), QTest::mouseClick(),
    QTest::mouseDClick(), QTest::mouseMove(), QTest::mousePress()
    and QTest::mouseRelease() functions to simulate the associated
    GUI events.
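
    For example, a sketch of simulating a mouse click on a button and
    verifying the effect with \l QSignalSpy (the widget is illustrative):

    \code
    QPushButton button;
    button.show();

    QSignalSpy spy(&button, &QPushButton::clicked);

    // Simulate a left click in the center of the button; a keyboard
    // modifier and a delay in milliseconds can optionally be passed.
    QTest::mouseClick(&button, Qt::LeftButton);

    QCOMPARE(spy.count(), 1);
    \endcode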

    Finally, we use the \l QCOMPARE() macro to check if the line edit's
    text is as expected.

    As before, to make our test case a stand-alone executable, the
    following two lines are needed:

    \snippet tutorial3/testgui.cpp 2

    The QTEST_MAIN() macro expands to a simple main() method that
    runs all the test functions, and since both the declaration and
    the implementation of our test class are in a .cpp file, we also
    need to include the generated moc file to make Qt's introspection
    work.
*/

/*!
    \example tutorial4

    \previouspage {Chapter 3: Simulating GUI Events}{Chapter 3}
    \nextpage {Chapter 5: Writing a Benchmark}{Chapter 5}

    \title Chapter 4: Replaying GUI Events
    \brief How to replay GUI events.

    In this chapter, we will show how to simulate a GUI event,
    and how to store a series of GUI events as well as replay them on
    a widget.

    The approach to storing a series of events and replaying them is
    quite similar to the approach explained in \l {Chapter 2:
    Data Driven Testing}{chapter 2}. All you need to do is to add a data
    function to your test class:

    \snippet tutorial4/testgui.cpp 0

    \section1 Writing the Data Function

    As before, a test function's associated data function carries the
    same name, appended by \c{_data}.

    \snippet tutorial4/testgui.cpp 1

    First, we define the elements of the table using the
    QTest::addColumn() function: a list of GUI events, and the
    expected result of applying the list of events to a QWidget. Note
    that the type of the first element is \l QTestEventList.

    A QTestEventList can be populated with GUI events that can be
    stored as test data for later usage, or be replayed on any
    QWidget.
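
    For example, a sketch of populating and replaying such a list (the
    widget and keys are illustrative):

    \code
    QTestEventList events;
    events.addKeyClick('h');
    events.addKeyClick('i');

    QLineEdit lineEdit;
    events.simulate(&lineEdit);    // replays both key clicks on the widget

    QCOMPARE(lineEdit.text(), QString("hi"));
    \endcode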

    In our current data function, we create two \l
    {QTestEventList} elements. The first list consists of a single click to
    the 'a' key. We add the event to the list using the
    QTestEventList::addKeyClick() function. Then we use the
    QTest::newRow() function to give the data set a name, and
    stream the event list and the expected result into the table.

    The second list consists of two key clicks: an 'a' followed by a
    'backspace'. Again we use the
    QTestEventList::addKeyClick() function to add the events to the list, and
    QTest::newRow() to put the event list and the expected
    result into the table with an associated name.

    \section1 Rewriting the Test Function

    Our test can now be rewritten:

    \snippet tutorial4/testgui.cpp 2

    The TestGui::testGui() function will be executed two times,
    once for each entry in the test data that we created in the
    associated TestGui::testGui_data() function.

    First, we fetch the two elements of the data set using the \l
    QFETCH() macro. \l QFETCH() takes two arguments: the data type of
    the element and the element name. Then we create a QLineEdit, and
    apply the list of events to that widget using the
    QTestEventList::simulate() function.

    Finally, we use the QCOMPARE() macro to check if the line edit's
    text is as expected.

    As before, to make our test case a stand-alone executable,
    the following two lines are needed:

    \snippet tutorial4/testgui.cpp 3

    The QTEST_MAIN() macro expands to a simple main() method that
    runs all the test functions, and since both the declaration and
    the implementation of our test class are in a .cpp file, we also
    need to include the generated moc file to make Qt's introspection
    work.
*/

/*!
    \example tutorial5

    \previouspage {Chapter 4: Replaying GUI Events}{Chapter 4}
    \nextpage {Chapter 6: Skipping Tests with QSKIP}{Chapter 6}

    \title Chapter 5: Writing a Benchmark
    \brief How to write a benchmark.

    In this final chapter we will demonstrate how to write benchmarks
    using Qt Test.

    \section1 Writing a Benchmark
    To create a benchmark we extend a test function with a QBENCHMARK macro.
    A benchmark test function will then typically consist of setup code and
    a QBENCHMARK macro that contains the code to be measured. This test
    function benchmarks QString::localeAwareCompare().

    \snippet tutorial5/benchmarking.cpp 0

    Setup can be done at the beginning of the function; the clock is not
    running at this point. The code inside the QBENCHMARK macro will be
    measured, and possibly repeated several times in order to get an
    accurate measurement.

    Several \l {testlib-benchmarking-measurement}{back-ends} are available
    and can be selected on the command line.

    \section1 Data Functions

    Data functions are useful for creating benchmarks that compare
    multiple data inputs, for example locale-aware compare versus standard
    compare.

    \snippet tutorial5/benchmarking.cpp 1

    The test function then uses the data to determine what to benchmark.

    \snippet tutorial5/benchmarking.cpp 2

    The "if (useLocaleCompare)" switch is placed outside the QBENCHMARK
    macro to avoid measuring its overhead. Each benchmark test function
    can have one active QBENCHMARK macro.

    \section1 External Tools

    Tools for handling and visualizing test data are available as part of
    the \l {qtestlib-tools} project.
    These include a tool for comparing performance data obtained from test
    runs and a utility to generate Web-based graphs of performance data.

    See the \l{qtestlib-tools Announcement}{qtestlib-tools announcement}
    for more information on these tools and a simple graphing example.

*/
/*!
    \page qttestlib-tutorial6.html

    \previouspage {Chapter 5: Writing a Benchmark}{Chapter 5}

    \title Chapter 6: Skipping Tests with QSKIP
    \brief How to skip tests in certain cases.

    \section2 Using QSKIP(\a description) in a test function

    If the QSKIP() macro is called from a test function, it stops
    the execution of the test without adding a failure to the test log.
    It can be used to skip tests that are certain to fail. The text in
    the QSKIP \a description parameter is appended to the test log,
    and explains why the test was not carried out.

    QSKIP can be used to skip testing when the implementation is not yet
    complete or not supported on a certain platform. When there are known
    failures, it is recommended to use QEXPECT_FAIL, so that the test is
    always completely executed.
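
    For example, a sketch of marking a known failure with QEXPECT_FAIL (the
    class under test and the failing check are illustrative):

    \code
    void TestParser::handlesEmptyInput()
    {
        Parser parser;   // hypothetical class under test

        // The next check is known to fail; mark it as expected so the rest
        // of the test function still runs. Passing Abort instead of
        // Continue would stop the test function at this point.
        QEXPECT_FAIL("", "Empty input handling is not implemented yet", Continue);
        QVERIFY(parser.parse(QString()));

        QVERIFY(parser.errors().isEmpty());   // further checks still execute
    }
    \endcode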

    Example of QSKIP in a test function:

    \snippet code/doc_src_qtqskip_snippet.cpp 0

    In a data-driven test, each call to QSKIP() skips only the current
    row of test data. If the data-driven test contains an unconditional
    call to QSKIP, it produces a skip message for each row of test data.

    \section2 Using QSKIP in a _data function

    If called from a _data function, the QSKIP() macro stops
    execution of the _data function. This prevents execution of the
    associated test function.

    See below for an example:

    \snippet code/doc_src_qtqskip.cpp 1

    \section2 Using QSKIP from initTestCase() or initTestCase_data()

    If called from \c initTestCase() or \c initTestCase_data(), the
    QSKIP() macro will skip all test and _data functions.
*/
