The GDB Performance Testsuite
=============================

This README contains notes on hacking on GDB's performance testsuite.
For notes on GDB's regular testsuite or how to run the performance testsuite,
see ../README.

Generated tests
***************

The testcase generator lets us easily test GDB on large programs.
The "monster" tests are mocks of real programs where GDB's
performance has been a problem.  Often it is difficult to build
these monster programs, but when measuring performance one doesn't
need the "real" program: all one needs is something that looks like
the real program along the axis one is measuring; for example, the
number of CUs (compilation units).

Structure of generated tests
****************************

Generated tests consist of a binary and potentially any number of
shared libraries.  One of these shared libraries, called "tail", is
special.  It is used to provide mocks of system-provided code, and
contains no generated code.  Typically system-provided libraries
are searched last, which can have significant performance consequences,
so we provide a means to exercise that.

The binary and the generated shared libraries can have a mix of
manually written and generated code.  Manually written code is
specified with the {binary,gen_shlib}_extra_sources config parameters,
which are lists of source files in testsuite/gdb.perf.  Generated
files are controlled with various configuration knobs.

Once a large test program is built, it makes sense to use it as much
as possible (i.e., with multiple tests).  Therefore perf data collection
for generated tests is split into two passes: the first pass builds
all the generated tests, and the second pass runs all the performance
tests.  The first pass is called "build-perf" and the second pass is
called "check-perf".  See ../README for instructions on running the tests.

Generated test directory layout
*******************************

All output lives under testsuite/gdb.perf in the build directory.

Because some of the tests can get really large (and take potentially
minutes to compile), parallelism is built into their compilation.
Note, however, that we don't run the tests in parallel, as doing so
can skew the results.

To keep things simple and consistent, we use the same mechanism
used by "make check-parallel".  There is one catch: we need one
.exp file for each "worker", but the .exp file must come from the
source tree.  To avoid generating .exp files for each worker, we
invoke lib/build-piece.exp for each worker with different arguments.
The file build-piece.exp lives in "lib" to prevent dejagnu from
finding it when it goes looking for .exp scripts to run.

Another catch is that each parallel build worker needs its own directory
so that their gdb.{log,sum} files don't collide.  On the other hand,
it's easier if their output (all the object files and shared libraries)
goes into the same directory.

The above considerations yield the following layout:

$objdir/testsuite/gdb.perf/

	gdb.log, gdb.sum: result of doing final link and running tests

	workers/

		gdb.log, gdb.sum: result of gen-workers step

		$program_name/

			${program_name}-0.worker
			...
			${program_name}-N.worker: input to build-pieces step

	outputs/

		${program_name}/

			${program_name}-0/
			...
			${program_name}-N/

				gdb.log, gdb.sum: for each build-piece worker

			pieces/

				generated sources, object files, shlibs

			${run_name_1}: binary for test config #1
			...
			${run_name_N}: binary for test config #N

Generated test configuration knobs
**********************************

The monster program generator provides various knobs for building
different kinds of monster programs.  For a list of the knobs see function
GenPerfTest::init_testcase in testsuite/lib/perftest.exp.
Most knobs are self-explanatory.
Here is a description of the less obvious ones.

binary_extra_sources

	This is the list of non-machine-generated sources that go
	into the test binary.  There must be at least one: the one
	with main.

class_specs

	List of pairs of keys and values.
	Supported keys are:
	count: number of classes
	  Default: 1
	name: list of namespaces and class name prefix
	  E.g., { ns0 ns1 foo } -> ns0::ns1::foo_<cu#>_{0,1,...}
	  There is no default; this value must be specified.
	nr_members: number of members
	  Default: 0
	nr_static_members: number of static members
	  Default: 0
	nr_methods: number of methods
	  Default: 0
	nr_inline_methods: number of inline methods
	  Default: 0
	nr_static_methods: number of static methods
	  Default: 0
	nr_static_inline_methods: number of static inline methods
	  Default: 0

	E.g.,
	class foo {};
	namespace ns1 { class bar {}; }
	would be represented as:
	{
	  { count 1 name { foo } }
	  { count 1 name { ns1 bar } }
	}

	The name of each generated class is "<prefix>_<cu_nr>_<class_nr>",
	where <cu_nr> is the number of the compilation unit the class is
	defined in and <class_nr> is the index of the class within that
	compilation unit.

	There's currently no support for nesting classes in classes,
	or for specifying baseclasses or templates.
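
	To make the naming scheme concrete, here is a small Python sketch
	(the generator itself is written in Tcl; this helper and its
	dict-based spec format are purely illustrative) that expands one
	class_specs entry into the class names it would produce for a
	given compilation unit:

```python
def expand_class_names(spec, cu_nr):
    """Expand one class_specs entry into class names for CU number cu_nr.

    `spec` mirrors the Tcl key/value pairs described above, e.g.
    {"count": 2, "name": ["ns0", "ns1", "foo"]}.  Illustrative only;
    the real generator is GenPerfTest in testsuite/lib/perftest.exp.
    """
    *namespaces, prefix = spec["name"]   # last element is the class name prefix
    qualifier = "::".join(namespaces)
    names = []
    for class_nr in range(spec.get("count", 1)):   # count defaults to 1
        name = "%s_%d_%d" % (prefix, cu_nr, class_nr)
        names.append("%s::%s" % (qualifier, name) if qualifier else name)
    return names

# { ns0 ns1 foo } with count 2, in CU 0:
print(expand_class_names({"count": 2, "name": ["ns0", "ns1", "foo"]}, 0))
# -> ['ns0::ns1::foo_0_0', 'ns0::ns1::foo_0_1']
```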

Misc. configuration knobs
*************************

These knobs control building or running of the test and are specified
like any global Tcl variable.

CAT_PROGRAM

	Default is /bin/cat; you shouldn't need to change this.

SHA1SUM_PROGRAM

	Default is /usr/bin/sha1sum.

PERF_TEST_COMPILE_PARALLELISM

	An integer specifying the amount of parallelism in the builds,
	akin to make's -j flag.  The default is 10.

Writing a generated test program
********************************

The best way to write a generated test program is to take an existing
one as boilerplate.  Two good examples are gmonster1.exp and gmonster2.exp.
gmonster1.exp builds a big binary with various custom, manually written
code, and gmonster2.exp builds (essentially) the equivalent binary split
up over several shared libraries.

Writing a performance test that uses a generated program
********************************************************

The best way to write a test is to take an existing one as boilerplate.
Good examples are gmonster1-*.exp and gmonster2-*.exp.

The naming used thus far is that "foo.exp" builds the test program
and there is one "foo-bar.exp" file for each performance test
that uses test program "foo".

In addition to writing the test driver .exp script, one must also
write a Python script that is used to run the test.
The contents of this script are defined by the performance testsuite
harness: it defines a class, which is a subclass of one of the
classes in gdb.perf/lib/perftest/perftest.py.
See gmonster-null-lookup.py for an example.

Note: Since gmonster1 and gmonster2 are treated as variations of
the same program, each test shares the same Python script.
E.g., gmonster1-null-lookup.exp and gmonster2-null-lookup.exp
both use gmonster-null-lookup.py.
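
As a rough illustration of the shape such a script takes, here is a
runnable Python sketch.  The base class below is a hypothetical stand-in:
the real base classes in gdb.perf/lib/perftest/perftest.py hook into GDB
and the harness's measurement machinery rather than timing things locally.

```python
import time


class TestCaseStub:
    """Hypothetical stand-in for a perftest.py base class.

    It only times execute_test() so this sketch runs outside GDB; the
    real harness records measurements and reports them to the testsuite.
    """

    def run(self):
        self.warm_up()                          # not measured
        start = time.perf_counter()
        self.execute_test()                     # the measured workload
        return time.perf_counter() - start


class NullLookup(TestCaseStub):
    """Sketch of a test that repeatedly looks up a nonexistent symbol."""

    def __init__(self, repeat=3):
        self.repeat = repeat

    def warm_up(self):
        # Run the workload once first so caches reach a steady state.
        self.lookup_symbol("no_such_symbol")

    def execute_test(self):
        for _ in range(self.repeat):
            self.lookup_symbol("no_such_symbol")

    def lookup_symbol(self, name):
        # Placeholder: a real test would perform the lookup inside GDB.
        return None


elapsed = NullLookup().run()
print("measured %f seconds" % elapsed)
```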

Running performance tests for generated programs
************************************************

There are two steps: build and run.

Example:

bash$ make -j10 build-perf RUNTESTFLAGS="gmonster1.exp"
bash$ make -j10 check-perf RUNTESTFLAGS="gmonster1-null-lookup.exp" \
    GDB_PERFTEST_MODE=run