| Name                               | Date        | Size    | #Lines | LOC   |
|------------------------------------|-------------|---------|--------|-------|
| apm_configs/                       | 30-Mar-2022 | -       | 2      | 1     |
| output/                            | 30-Mar-2022 | -       | 2      | 1     |
| quality_assessment/                | 03-May-2022 | -       | 5,215  | 3,725 |
| BUILD.gn                           | 30-Mar-2022 | 5.2 KiB | 169    | 155   |
| OWNERS                             | 30-Mar-2022 | 95      | 6      | 5     |
| README.md                          | 30-Mar-2022 | 4.8 KiB | 126    | 108   |
| apm_quality_assessment.py          | 30-Mar-2022 | 8.1 KiB | 186    | 142   |
| apm_quality_assessment.sh          | 30-Mar-2022 | 2.8 KiB | 92     | 58    |
| apm_quality_assessment_boxplot.py  | 30-Mar-2022 | 5.2 KiB | 147    | 97    |
| apm_quality_assessment_export.py   | 30-Mar-2022 | 1.8 KiB | 62     | 37    |
| apm_quality_assessment_gencfgs.py  | 30-Mar-2022 | 3.3 KiB | 99     | 70    |
| apm_quality_assessment_optimize.py | 30-Mar-2022 | 6.4 KiB | 180    | 133   |
| apm_quality_assessment_unittest.py | 30-Mar-2022 | 909     | 29     | 14    |
README.md

# APM Quality Assessment tool

Python wrapper of APM simulators (e.g., `audioproc_f`) with which quality
assessment can be automated. The tool can simulate different noise
conditions, input signals, and APM configurations, and it computes several
scores. Once the scores are computed, the results can be easily exported to an
HTML page on which one can listen to the APM input and output signals, as well
as the reference signal used for evaluation.

## Dependencies
 - OS: Linux
 - Python 2.7
 - Python libraries: enum34, numpy, scipy, pydub (0.17.0+), pandas (0.20.1+),
                     pyquery (1.2+), jsmin (2.2+), csscompressor (0.9.4)
 - It is recommended to use a dedicated Python environment (the steps below are
   consolidated into a sketch after this list)
   - install `virtualenv`
   - `$ sudo apt-get install python-virtualenv`
   - set up a new Python environment (e.g., `my_env`)
   - `$ cd ~ && virtualenv my_env`
   - activate the new Python environment
   - `$ source ~/my_env/bin/activate`
   - add dependencies via `pip`
   - `(my_env)$ pip install enum34 numpy pydub scipy pandas pyquery jsmin \`
                `csscompressor`
 - PolqaOem64 (see http://www.polqa.info/)
    - Tested with POLQA Library v1.180 / P863 v2.400
 - Aachen Impulse Response (AIR) Database
    - Download https://www2.iks.rwth-aachen.de/air/air_database_release_1_4.zip
 - Input probing signals and noise tracks (you can make your own dataset; see
   *1)
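
For convenience, the environment setup steps above can be run in one go; this
is a minimal sketch that assumes a Debian/Ubuntu host and reuses the `my_env`
name from the list above:

```
$ sudo apt-get install python-virtualenv
$ cd ~ && virtualenv my_env
$ source ~/my_env/bin/activate
(my_env)$ pip install enum34 numpy pydub scipy pandas pyquery jsmin \
          csscompressor
```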

## Build
 - Compile WebRTC
 - Go to `out/Default/py_quality_assessment` and check that
   `apm_quality_assessment.py` exists

## Unit tests
 - Compile WebRTC
 - Go to `out/Default/py_quality_assessment`
 - Run `python -m unittest discover -p "*_unittest.py"`

## First time setup
 - Deploy PolqaOem64 and set the `POLQA_PATH` environment variable (a
   consolidated sketch of all the steps in this list is shown below)
   - e.g., `$ export POLQA_PATH=/var/opt/PolqaOem64`
 - Deploy the AIR Database and set the `AECHEN_IR_DATABASE_PATH` environment
   variable
   - e.g., `$ export AECHEN_IR_DATABASE_PATH=/var/opt/AIR_1_4`
 - Deploy probing signal tracks into
   - `out/Default/py_quality_assessment/probing_signals` (*1)
 - Deploy noise tracks into
   - `out/Default/py_quality_assessment/noise_tracks` (*1, *2)
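
As a reference, here is a minimal sketch of the whole first-time setup; the
deployment paths match the examples above, while `/path/to/...` stands for the
(hypothetical) location of your own tracks:

```
$ export POLQA_PATH=/var/opt/PolqaOem64
$ export AECHEN_IR_DATABASE_PATH=/var/opt/AIR_1_4
$ cd out/Default/py_quality_assessment
$ mkdir -p probing_signals noise_tracks
$ cp /path/to/probing/tracks/*.wav probing_signals/  # mono, 48 kHz, 16 bit (*1)
$ cp /path/to/noise/tracks/*.wav noise_tracks/       # (*1, *2)
```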

(*1) You can use custom files as long as they are mono tracks sampled at 48 kHz
and encoded as signed 16 bit PCM (it is recommended to convert and export the
tracks with Audacity).
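
If you prefer the command line to Audacity, the same conversion can be done
with a tool like SoX; this assumes SoX is installed (it is not one of this
tool's dependencies) and uses hypothetical file names:

```
$ sox input_track.wav -r 48000 -b 16 -c 1 converted_track.wav
```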

## Usage (scores computation)
 - Go to `out/Default/py_quality_assessment`
 - Check `apm_quality_assessment.sh`, an example script that parallelizes the
   experiments
 - Adjust the script according to your preferences (e.g., output path)
 - Run `apm_quality_assessment.sh`
 - The script will end by opening the browser and showing ALL the computed
   scores
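
Since computing the scores can take a long time, you may want to run the script
detached from your terminal; a common shell pattern for this (an assumption,
not part of the script itself) is:

```
$ nohup ./apm_quality_assessment.sh > qa_run.log 2>&1 &
$ tail -f qa_run.log  # follow the progress
```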

## Usage (export reports)
Showing all the results at once can be confusing. You may therefore want to
export separate reports. In this case, you can use the
`apm_quality_assessment_export.py` script as follows:

 - Set `--output_dir, -o` to the same value used in `apm_quality_assessment.sh`
 - Use regular expressions to select/filter out scores by
    - APM configurations: `--config_names, -c`
    - capture signals: `--capture_names, -i`
    - render signals: `--render_names, -r`
    - echo simulator: `--echo_simulator_names, -e`
    - test data generators: `--test_data_generators, -t`
    - scores: `--eval_scores, -s`
 - Assign a suffix to the report name using `-f <suffix>`

For instance:

```
$ ./apm_quality_assessment_export.py \
  -o output/ \
  -c "(^default$)|(.*AE.*)" \
  -t \(white_noise\) \
  -s \(polqa\) \
  -f echo
```
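
This command produces a report with the `echo` suffix that only includes the
`default` APM configuration and the configurations whose name contains `AE`,
restricted to the `white_noise` test data generator and the POLQA score.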

## Usage (boxplot)
After generating stats, it can help to visualize how a score depends on a
certain APM simulator parameter. The `apm_quality_assessment_boxplot.py` script
helps with that, producing plots similar to [this
one](https://matplotlib.org/mpl_examples/pylab_examples/boxplot_demo_06.png).

Suppose some scores come from running the APM simulator `audioproc_f` with
or without the level controller: `--lc=1` or `--lc=0`. Then two boxplots
side by side can be generated with:

```
$ ./apm_quality_assessment_boxplot.py \
      -o /path/to/output \
      -v <score_name> \
      -n /path/to/dir/with/apm_configs \
      -z lc
```

## Troubleshooting
The input wav file must be (a quick way to check is shown below):
  - sampled at a sample rate that is a multiple of 100 (required by POLQA)
  - in the 16 bit format (required by `audioproc_f`)
  - encoded in the Microsoft WAV signed 16 bit PCM format (Audacity default
    when exporting)
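
On Linux (the required OS), the standard `file` utility reports these
properties; the exact wording of the output may vary with the `file` version,
and the file name here is an example:

```
$ file probing_signals/my_track.wav
probing_signals/my_track.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 48000 Hz
```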

Depending on the license, the POLQA tool may take “breaks” as a way to limit the
throughput. When this happens, the APM Quality Assessment tool is slowed down.
For more details about this limitation, check Section 10.9.1 in the POLQA manual
v.1.18.

In case of issues with the POLQA score computation, check
`py_quality_assessment/eval_scores.py` and adapt
`PolqaScore._parse_output_file()`.
The code can also be fixed directly in the build directory (namely, in
`out/Default/py_quality_assessment/eval_scores.py`).