Name          Date         Size      #Lines  LOC
Makefile.am   03-Dec-2021  924       23      2
Makefile.in   03-May-2022  44.3 KiB  1,331   1,263
README        03-Dec-2021  3.2 KiB   91      61
config.bench  03-Dec-2021  1.2 KiB   34      29
formulas      03-Dec-2021  2.7 KiB   132     123
gen.py        03-Dec-2021  28.3 KiB  899     740
prepare.sh    03-Dec-2021  862       25      19
rundbamin.pl  03-Dec-2021  273       18      13
stat.sh       03-Dec-2021  4.1 KiB   108     100
stats.sh      03-Dec-2021  867       43      34
tabl.pl       03-Dec-2021  7 KiB     257     222
tabl1.pl      03-Dec-2021  4.3 KiB   189     157
tabl2.pl      03-Dec-2021  6.8 KiB   268     218
tabl3.pl      03-Dec-2021  6.8 KiB   269     219
tabl4.pl      03-Dec-2021  6.8 KiB   271     224

README

This benchmark was used in the FORTE'14 paper "Mechanizing the
Minimization of Deterministic Generalized Büchi Automata", by
Souheib Baarir and Alexandre Duret-Lutz.  The generated data
used in the paper are at

  https://www.lrde.epita.fr/~adl/forte14/

Note that the encoding used in the SAT-based minimization has evolved
since the paper, so if you want to reproduce exactly the benchmark
from the paper, you should download a copy of Spot 1.2.3.

This benchmark has grown since FORTE'14: new SAT-based methods have
been implemented, and the benchmark now measures all of them and
identifies the best.

To reproduce, follow these instructions:

1) Compile and install Spot if this is not already done.
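
   If you build Spot from a release tarball, the usual autotools
   sequence should work (the installation prefix and whether you need
   'sudo' depend on your setup):

   % ./configure && make && sudo make install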

2) Compile Glucose 3 from http://www.labri.fr/perso/lsimon/glucose/
   and make sure the 'glucose' binary is in your PATH.

3) Compile ltl2dstar from http://www.ltl2dstar.de/ and
   make sure the 'ltl2dstar' binary is in your PATH.
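
   For instance, if the two binaries ended up in $HOME/glucose/simp
   and $HOME/ltl2dstar/src (these paths are only illustrative; adjust
   them to wherever your builds actually placed the binaries):

   % export PATH=$HOME/glucose/simp:$HOME/ltl2dstar/src:$PATH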

4) Compile DBAminimizer from
   http://www.react.uni-saarland.de/tools/dbaminimizer

   and define the path to minimize.py with

   % export DBA_MINIMIZER=$HOME/dbaminimizer/minimize.py
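
   A quick sanity check that the variable points to an existing
   script:

   % test -f "$DBA_MINIMIZER" && echo "minimizer found"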

5) Then make sure you are in this directory:

   % cd bench/dtgbasat/

6) Classify the set of formulas with

   % ./prepare.sh

   This will read the formulas from the file "formulas" and output a
   file "info.ltl", in which each formula is associated with a class
   (the ①, ②, ③, and ④ of the paper) and a number of acceptance
   conditions (the "m" of the paper).
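
   If you are curious about the result of the classification, you can
   peek at the generated file:

   % head info.ltl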

7) Build a makefile to run all experiments

   % ./stats.sh
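
   The makefile driven in step 10 is stat.mk, so you can check that
   it was created:

   % ls -l stat.mk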

8) You may optionally change the directory that contains temporary
   files with

   % export TMPDIR=someplace

   (These temporary files can be several GB large.)

   Note that runtime will be limited to 2h, but memory is not bounded.
   You can set such a limit with "ulimit" if you wish.  For instance

   % ulimit -v 41943040   # 40 GiB, assuming ulimit -v counts KiB
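
   For example, to keep the temporary files on a partition with more
   room ("/var/tmp/dtgbasat" is only an illustrative path; pick any
   location with enough free space):

   % mkdir -p /var/tmp/dtgbasat && export TMPDIR=/var/tmp/dtgbasat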

9) Since Spot now has more SAT-based minimization methods, you may
   want to restrict which methods are benchmarked before running the
   experiment.  Have a look at the 'config.bench' file: by default it
   lists all the possible methods.  Leave it unchanged if you want to
   compare all of them.  If you change it, you need to regenerate the
   stat-gen.sh file by running

   % ./gen.py script --timeout <int> --unit <h|m|s>
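
   For instance, to match the 2h runtime limit mentioned in step 8
   (assuming the timeout value is an integer expressed in the chosen
   unit):

   % ./gen.py script --timeout 2 --unit h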

10) Actually run all experiments

    % make -j4 -f stat.mk

    This will build a CSV file called "all.csv".
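
    Once the run is complete, you can get a quick overview of the
    output (the column names depend on which methods were benchmarked):

    % head -1 all.csv
    % wc -l all.csv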

11) You may generate LaTeX code for the tables with 'gen.py'.  This
    script is really helpful as it can generate tables even from
    partial results.  If, after running all the benchmarks, you want
    to compare only two of the methods, just comment out the others
    in the configuration file 'config.bench' and run this script
    again:

    % ./gen.py results

    This script reads the all.csv file generated at the end of the
    benchmark and outputs two PDF documents:
    - results.pdf, which contains all statistics about each formula
      for each method;
    - resume.pdf, which presents two tables counting how many times
      a method is better than another.


For more instructions about how to use ltl2tgba and dstar2tgba to
compute minimal DTGBA or DTBA, please read doc/userdoc/satmin.html