The bm_diff Family
====

This family of Python scripts can be incredibly useful for fast iteration over
different performance tweaks. The tools allow you to save performance data from
a baseline commit, then quickly compare data from your working branch to that
baseline data to see if you have made any performance wins.

The tools operate in three concrete steps, which can be invoked separately or
all together via the driver script, bm_main.py. This readme describes the
typical workflow for these scripts, then covers the details of each script for
advanced usage.

## Normal Workflow

Let's say you are working on a performance optimization for grpc_error. You have
made some significant changes and want to see some data. From your branch, run
(ensure everything is committed first):

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`

This will build the `bm_error` binary on your branch, and then it will check out
master and build it there too. It will then run these benchmarks 5 times each.
Lastly, it will compute the statistically significant performance differences
between the two branches. This should show the nice performance wins your
changes have made.

If you have already invoked bm_main with `-d master`, you should instead use
`-o` for subsequent runs. This allows the script to skip re-building and
re-running the unchanged master branch. For example:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o`

This will only build and run `bm_error` on your branch. It will then compare
the output to the saved runs from master.

## Advanced Workflow

If you have a deeper knowledge of these scripts, you can use them to do more
fine-tuned benchmark comparisons. For example, you could build, run, and save
the benchmark output from two different base branches, then diff both of these
baselines against your working branch to see how the different metrics change.
The rest of this doc goes over the details of what each of the individual
modules accomplishes.

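As a sketch of that kind of two-baseline comparison, the sequence below drives
the individual scripts (described in the sections that follow) by hand. The
branch names and run names (`release_base`, `master_base`, `current`) are
placeholders, not anything the tools require:

```sh
# Build, run, and save a first baseline (placeholder branch and run names).
git checkout some-release-branch
tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n release_base
tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n release_base -l 5

# Build, run, and save a second baseline.
git checkout master
tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n master_base
tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n master_base -l 5

# Build and run your working branch, then diff it against each baseline.
git checkout my-working-branch
tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n current
tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n current -l 5
tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o release_base -n current -l 5
tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o master_base -n current -l 5
```
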
## bm_build.py

This script builds the benchmarks. It takes in a name parameter, and will
store the binaries based on that name. Both the `opt` and `counters`
configurations are built. The `opt` build is used to get cpu_time and
real_time, and the `counters` build is used to track other metrics such as
allocs, atomic adds, and so on.

For example, if you were to invoke (we assume everything is run from the
root of the repo):

`tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n baseline`

then the microbenchmark binaries will show up under
`bm_diff_baseline/{opt,counters}/bm_error`

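Concretely, after the invocation above you would expect a layout roughly like
the following (comments added here for orientation):

```
bm_diff_baseline/
├── opt/
│   └── bm_error        # used for cpu_time and real_time
└── counters/
    └── bm_error        # used for allocs, atomic adds, and other counters
```
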
## bm_run.py

This script runs the benchmarks. It takes a name parameter that must match the
name that was passed to `bm_build.py`. The script then runs each benchmark
multiple times (the default is 20, which can be changed via the loops
parameter). The output is saved as
`<benchmark name>.<config>.<name>.<loop idx>.json`.

For example, if you were to run:

`tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n baseline -l 5`

Then an example output file would be `bm_error.opt.baseline.0.json`.

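With `-l 5`, and assuming the run covers both the `opt` and `counters`
configurations built by `bm_build.py`, you would expect a set of output files
along these lines:

```
bm_error.opt.baseline.0.json
bm_error.opt.baseline.1.json
...
bm_error.opt.baseline.4.json
bm_error.counters.baseline.0.json
...
bm_error.counters.baseline.4.json
```
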
## bm_diff.py

This script takes in the output from two benchmark runs, computes the diff
between them, and prints any significant improvements or regressions. It takes
in two name parameters, old and new; both must have previously been built and
run.

For example, assuming you had already built and run a 'baseline' microbenchmark
from master, and then you also built and ran a 'current' microbenchmark from
the branch you were working on, you could invoke:

`tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o baseline -n current -l 5`

This would output the percent difference between your branch and master.

## bm_main.py

This is the driver script. It uses the previous three modules and does
everything for you. You pass in the benchmarks to be run, the number of loops,
the number of CPUs to use, and the commit to compare to. Then the script will:

* Build the benchmarks at head, then check out the branch to compare to and
  build the benchmarks there
* Run both sets of microbenchmarks
* Run bm_diff.py to compare the two and output the difference

For example, one might run:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`

This would compare the current branch's error benchmarks to master.

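Under the hood, that single command is roughly equivalent to driving the three
modules by hand. This is only a sketch; the run names `old` and `new` are
illustrative placeholders:

```sh
# Rough manual equivalent of `bm_main.py -b bm_error -l 5 -d master` (sketch only).
tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n new
git checkout master
tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n old
git checkout -            # return to your working branch
tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n new -l 5
tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n old -l 5
tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o old -n new -l 5
```
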
This script is invoked by our infrastructure on every PR to protect against
regressions and demonstrate performance wins.

However, if you are iterating over different performance tweaks quickly, it is
unnecessary to build and run the baseline commit every time. For that case,
when you are sure the baseline benchmarks have already been built and run, use
the `--old` flag to pass in the name of the baseline. This will only build and
run the current branch. For example:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o old`