/**********************************************************************

Audacity: A Digital Audio Editor

@file MIDIPlay.cpp
@brief Inject added MIDI playback capability into Audacity's audio engine

Paul Licameli split from AudioIO.cpp and AudioIOBase.cpp

*//*****************************************************************//**

\class MIDIPlay
\brief Callbacks that AudioIO uses, to synchronize audio and MIDI playback

\par EXPERIMENTAL_MIDI_OUT
If EXPERIMENTAL_MIDI_OUT is defined, this class manages
MIDI playback. It is decoupled from AudioIO by the abstract interface
AudioIOExt. Some of its methods execute on the main thread and some on the
low-latency PortAudio thread.

\par MIDI With Audio
When Audio and MIDI play simultaneously, MIDI synchronizes to Audio.
This is necessary because the Audio sample clock is not the same
hardware as the system time used to schedule MIDI messages. MIDI
is synchronized to Audio because it is simple to pause or rush
the dispatch of MIDI messages, but generally impossible to pause
or rush synchronous audio samples (without distortion).

\par
MIDI output is driven by the low-latency thread (PortAudio's callback)
that also sends samples to the output device. The relatively low
latency to the output device allows Audacity to stop audio output
quickly. We want the same behavior for MIDI, but there is no
periodic callback from PortMidi (because MIDI is asynchronous).

\par
When Audio is running, MIDI is synchronized to Audio. Globals are set
in the Audio callback (audacityAudioCallback) for use by a time
function that reports milliseconds to PortMidi. (Details below.)

\par MIDI Without Audio
When Audio is not running, PortMidi uses its own millisecond timer
since there is no audio to synchronize to. (Details below.)

\par Implementation Notes and Details for MIDI
When opening devices, successAudio and successMidi indicate errors
if false, so normally both are true. Use playbackChannels,
captureChannels and mMidiPlaybackTracks.empty() to determine if
Audio or MIDI is actually in use.

\par Audio Time
Normally, the current time during playback is given by the variable
mTime. mTime normally advances by frames / samplerate each time an
audio buffer is output by the audio callback. However, Audacity has
a speed control that can perform continuously variable time stretching
on audio. This is achieved in two places: the playback "mixer" that
generates the samples for output processes the audio according to
the speed control. In a separate algorithm, the audio callback updates
mTime by (frames / samplerate) * factor, where factor reflects the
speed at mTime. This effectively integrates speed to get position.
Negative speeds are allowed too, for instance in scrubbing.

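The integration step above can be sketched as follows; AdvanceTime, frames,
and speedAt are hypothetical names for illustration, not Audacity's actual
identifiers:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Hypothetical sketch of the position update described above: each audio
// callback advances the time cursor by (frames / samplerate) * factor,
// where factor is the speed at the current time (negative while scrubbing
// backwards). This integrates speed to get position.
static double AdvanceTime(double mTime, unsigned long frames, double sampleRate,
                          const std::function<double(double)> &speedAt)
{
    const double factor = speedAt(mTime);
    return mTime + (frames / sampleRate) * factor;
}
```

For example, at a constant speed factor of 2.0, a 441-frame buffer at
44100 Hz advances the cursor by 0.02 s instead of 0.01 s.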
\par The Big Picture
@verbatim

Sample
Time (in seconds, = total_sample_count / sample_rate)
^
|                                                /        /
|        y = x - mSystemMinusAudioTime          /        /
|                                              /  #     /
|                                             /        /
|                                            /    #   <- callbacks (#) showing
|                                           /#       /    lots of timing jitter.
|    top line is "full buffer"             /        /     Some are later,
|       condition                         /        /      indicating buffer is
|                                        /        /       getting low. Plot
|                                       /  #     /        shows sample time
|                                      /        #         (based on how many
|                                     /   #    /          samples previously
|                                    /        /           *written*) vs. real
|                                   /   #    /            time.
|                                  /<------>/   audio latency
|                                 /#      v/
|                                /        /   bottom line is "empty buffer"
|                               /   #    /      condition = DAC output time
|                              /        /
|                             /   # <-- rapid callbacks as buffer is filled
|                            /        /
0 +...+---------#---------------------------------------------------->
0      ^ |      |                                           real time
       | |      first callback time
       | mSystemMinusAudioTime
       |
Probably the actual real times shown in this graph are very large
in practice (> 350,000 sec.), so the X "origin" might be when
the computer was booted or 1970 or something.


@endverbatim

To estimate the true DAC time (needed to synchronize MIDI), we need
a mapping from track time to DAC time. The estimate is the theoretical
time of the full buffer (top diagonal line) + audio latency. To
estimate the top diagonal line, we "draw" the line to be at least
as high as any sample time corresponding to a callback (#), and we
slowly lower the line in case the sample clock is slow or the system
clock is fast, preventing the estimated line from drifting too far
from the actual callback observations. The line is occasionally
"bumped" up by new callback observations, but continuously
"lowered" at a very low rate. All adjustment is accomplished
by changing mSystemMinusAudioTime, shown here as the X-intercept.\n
theoreticalFullBufferTime = realTime - mSystemMinusAudioTime\n
To estimate audio latency, notice that the first callback happens on
an empty buffer, but the buffer soon fills up. This will cause a rapid
re-estimation of mSystemMinusAudioTime. (The first estimate of
mSystemMinusAudioTime will simply be the real time of the first
callback time.) By watching these changes, which happen within ms of
starting, we can estimate the buffer size and thus audio latency.
So, to map from track time to real time, we compute:\n
DACoutputTime = trackTime + mSystemMinusAudioTime\n
There are some additional details to avoid counting samples while
paused or while waiting for initialization, MIDI latency, etc.
Also, in the code, track time is measured with respect to the track
origin, so there's an extra term to add (mT0) if you start somewhere
in the middle of the track.
Finally, when a callback occurs, you might expect there is room in
the output buffer for the requested frames, so maybe the "full buffer"
sample time should be based not on the first sample of the callback, but
the last sample time + 1 sample. I suspect, at least on Linux, that the
callback occurs as soon as the last callback completes, so the buffer is
really full, and the callback thread is going to block waiting for space
in the output buffer.

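As a rough sketch of the bump-and-lower adjustment just described (all names
and the lowering rate are hypothetical, not the actual members):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Sketch of the drifting-line estimate: keep the smallest observed
// (systemTime - sampleTime) so the line stays at least as high as every
// callback observation, and otherwise raise the offset slowly, which
// lowers the line at a very low rate. kLowerRate is an assumed constant.
struct TimeMapSketch {
    double mSystemMinusAudioTime = std::numeric_limits<double>::max();
    static constexpr double kLowerRate = 0.0002; // per second of elapsed time

    void OnCallback(double systemTime, double sampleTime, double secondsSinceLast) {
        const double observed = systemTime - sampleTime;
        if (observed < mSystemMinusAudioTime)
            mSystemMinusAudioTime = observed;  // "bump" the line up
        else
            mSystemMinusAudioTime += kLowerRate * secondsSinceLast; // lower slowly
    }

    // theoreticalFullBufferTime = realTime - mSystemMinusAudioTime, so
    // mapping track time to DAC time just adds the offset back:
    double DacOutputTime(double trackTime) const {
        return trackTime + mSystemMinusAudioTime;
    }
};
```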
\par Midi Time
MIDI is not warped according to the speed control. This might be
something that should be changed. (Editorial note: Wouldn't it
make more sense to display audio at the correct time and allow
users to stretch audio the way they can stretch MIDI?) For now,
MIDI plays at 1 second per second, so it requires an unwarped clock.
In fact, MIDI time synchronization requires a millisecond clock that
does not pause. Note that mTime will stop progress when the Pause
button is pressed, even though audio samples (zeros) continue to
be output.

\par
Therefore, we define the following interface for MIDI timing:
\li \c AudioTime() is the time based on all samples written so far, including zeros output during pauses. AudioTime() is based on the start location mT0, not zero.
\li \c PauseTime() is the amount of time spent paused, based on a count of zero-padding samples output.
\li \c MidiTime() is an estimate in milliseconds of the current audio output time + 1s. In other words, what Audacity track time corresponds to the audio (plus pause insertions) at the DAC output?

\par AudioTime() and PauseTime() computation
AudioTime() is simply mT0 + mNumFrames / mRate.
mNumFrames is incremented in each audio callback. Similarly, PauseTime()
is pauseFrames / rate. pauseFrames is also incremented in
each audio callback when a pause is in effect or audio output is ready to start.

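A minimal sketch of these two counters (hypothetical names, not the actual
members):

```cpp
#include <cassert>

// AudioTime() = mT0 + mNumFrames / mRate; PauseTime() = pauseFrames / rate.
// Both counters are incremented in the audio callback; frames output while
// paused are zeros but are still counted in mNumFrames.
struct TimingSketch {
    double mT0 = 0.0;
    double mRate = 44100.0;
    long mNumFrames = 0;
    long mPauseFrames = 0;

    void OnCallback(long frames, bool paused) {
        mNumFrames += frames;
        if (paused)
            mPauseFrames += frames;
    }

    double AudioTime() const { return mT0 + mNumFrames / mRate; }
    double PauseTime() const { return mPauseFrames / mRate; }
};
```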
\par MidiTime() computation
MidiTime() is computed based on information from PortAudio's callback,
which estimates the system time at which the current audio buffer will
be output. Consider the (unimplemented) function RealToTrack() that
maps real audio write time to track time. If writeTime is the system
time for the first sample of the current output buffer, and
if we are in the callback, so AudioTime() also refers to the first sample
of the buffer, then \n
RealToTrack(writeTime) = AudioTime() - PauseTime()\n
We want to know RealToTrack of the current time (when we are not in the
callback), so we use this approximation for small d: \n
RealToTrack(t + d) = RealToTrack(t) + d \n
Letting t = writeTime and d = (systemTime - writeTime), we can
substitute to get:\n
RealToTrack(systemTime)\n
= RealToTrack(writeTime) + systemTime - writeTime\n
= AudioTime() - PauseTime() + (systemTime - writeTime) \n
MidiTime() should include pause time, so that it increases smoothly,
and audioLatency so that MidiTime() corresponds to the time of audio
output rather than audio write times. Also MidiTime() is offset by 1
second to avoid negative time at startup, so add 1: \n
MidiTime(systemTime) in seconds\n
= RealToTrack(systemTime) + PauseTime() - audioLatency + 1 \n
= AudioTime() + (systemTime - writeTime) - audioLatency + 1 \n
(Note that audioLatency is called mAudioOutLatency in the code.)
When we schedule a MIDI event with track time TT, we need
to map TT to a PortMidi timestamp. The PortMidi timestamp is exactly
MidiTime(systemTime) in ms units, and \n
MidiTime(x) = RealToTrack(x) + PauseTime() + 1, so \n
timestamp = TT + PauseTime() + 1 - midiLatency \n
Note 1: The timestamp is incremented by the PortMidi stream latency
(midiLatency) so we subtract midiLatency here for the timestamp
passed to PortMidi. \n
Note 2: Here, we're setting x to the time at which RealToTrack(x) = TT,
so then MidiTime(x) is the desired timestamp. To be completely
correct, we should assume that MidiTime(x + d) = MidiTime(x) + d,
and consider that we compute MidiTime(systemTime) based on the
*current* system time, but we really want MidiTime(x) for some
future time x at which RealToTrack(x) = TT.

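Collapsing the derivation above into the final timestamp formula gives a
one-line computation; this is a sketch with illustrative names, all inputs in
seconds and the result in the millisecond units PortMidi expects:

```cpp
#include <cassert>
#include <cmath>

// timestamp = TT + PauseTime() + 1 - (mMidiLatency + mSynthLatency),
// converted from seconds to PortMidi milliseconds.
static long MidiTimestampMs(double TT, double pauseTime,
                            double midiLatency, double synthLatency)
{
    const double seconds = TT + pauseTime + 1.0 - (midiLatency + synthLatency);
    return std::lround(seconds * 1000.0);
}
```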
\par
The difference AudioTime() - PauseTime() is the time "cursor" for
MIDI. When the speed control is used, MIDI and Audio will become
unsynchronized. In particular, MIDI will not be synchronized with
the visual cursor, which moves with scaled time reported in mTime.

\par Timing in Linux
It seems we cannot get much info from Linux. We can read the time
when we get a callback, and we get a variable frame count (it changes
from one callback to the next). Returning to the RealToTrack()
equations above: \n
RealToTrack(outputTime) = AudioTime() - PauseTime() - bufferDuration \n
where outputTime should be PortAudio's estimate for the most recent output
buffer, but at least on my Dell Latitude E7450, PortAudio is getting zero
from ALSA, so we need to find a proxy for this.

\par Estimating outputTime (Plan A, assuming double-buffered, fixed-size buffers, please skip to Plan B)
One can expect the audio callback to happen as soon as there is room in
the output for another block of samples, so we could just measure system
time at the top of the callback. Then we could add the maximum delay
buffered in the system. E.g. if there is simple double buffering and the
callback is computing one of the buffers, the callback happens just as
one of the buffers empties, meaning the other buffer is full, so we have
exactly one buffer delay before the next computed sample is output.

If computation falls behind a bit, the callback will be later, so the
delay to play the next computed sample will be less. I think a reasonable
way to estimate the actual output time is to assume that the computer is
mostly keeping up and that *most* callbacks will occur immediately when
there is space. Note that the most likely reason for the high-priority
audio thread to fall behind is the callback itself, but the start of the
callback should be pretty consistently keeping up.

Also, we do not have to have a perfect estimate of the time. Suppose we
estimate a linear mapping from sample count to system time by saying
that the sample count maps to the system time at the most recent callback,
and set the slope to 1% slower than real time (as if the sample clock is
slow). Now, at each callback, if the callback seems to occur earlier than
expected, we can adjust the mapping to be earlier. The earlier the
callback, the more accurate it must be. On the other hand, if the callback
is later than predicted, it must be a delayed callback (or else the
sample clock is more than 1% slow, which is really a hardware problem).
How bad can this be? Assuming callbacks every 30ms (this seems to be what
I'm observing in a default setup), you'll be a maximum of 1ms off even if
2 out of 3 callbacks are late. This is pretty reasonable given that
PortMIDI clock precision is 1ms. If buffers are larger and callback timing
is more erratic, errors will be larger, but even a few ms error is
probably OK.

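The error bound claimed above can be checked with a line of arithmetic;
MaxDriftSeconds is a made-up helper for the worked example:

```cpp
#include <cassert>
#include <cmath>

// With the mapping's slope set 1% slow and a timely callback arriving only
// after `latePeriods` late ones, the estimate sags by 1% of the elapsed time.
static double MaxDriftSeconds(double slopeError, int latePeriods, double periodSeconds)
{
    return slopeError * (latePeriods + 1) * periodSeconds;
}
```

With 30 ms callbacks and 2 out of 3 late, the drift is 0.01 * 3 * 0.030 =
0.0009 s, just under the 1 ms figure.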
\par Estimating outputTime (Plan B, variable framesPerBuffer in callback, please skip to Plan C)
ALSA is complicated because we get varying values of
framesPerBuffer from callback to callback. Assume you get more frames
when the callback is later (because there is more accumulated input to
deliver and more accumulated room in the output buffers). So take
the current time and subtract the duration of the frame count in the
current callback. This should be a time position that is relatively
jitter free (because we estimated the lateness by frame count and
subtracted that out). This time position intuitively represents the
current ADC time, or if no input, the time of the tail of the output
buffer. If we wanted DAC time, we'd have to add the total output
buffer duration, which should be reported by PortAudio. (If PortAudio
is wrong, we'll be systematically shifted in time by the error.)

Since there is still bound to be jitter, we can smooth these estimates.
First, we will assume a linear mapping from system time to audio time
with slope = 1, so really it's just the offset we need.

To improve the estimate, we get a new offset every callback, so we can
create a "smooth" offset by using a simple regression model (also
this could be seen as a first-order filter). The following formula
updates smooth_offset with a new offset estimate in the callback:
smooth_offset = smooth_offset * 0.9 + new_offset_estimate * 0.1
Since this is smooth, we'll have to be careful to give it a good initial
value to avoid a long convergence.

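That update rule, directly as code (seeding with the first raw estimate
avoids the slow convergence mentioned above):

```cpp
#include <cassert>
#include <cmath>

// First-order smoothing of the per-callback offset estimates, using the
// 0.9 / 0.1 weights from the formula above.
static double SmoothOffset(double smoothOffset, double newOffsetEstimate)
{
    return smoothOffset * 0.9 + newOffsetEstimate * 0.1;
}
```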
\par Estimating outputTime (Plan C)
ALSA is complicated because we get varying values of
framesPerBuffer from callback to callback. It seems there is a lot
of variation in callback times and buffer space. One solution would
be to go to a fixed-size double buffer, but Audacity seems to work
better as is, so Plan C is to rely on one invariant, which is that
the output buffer cannot overflow, so there's a limit to how far
ahead of the DAC time we can be writing samples into the
buffer. Therefore, we'll assume that the audio clock runs slow by
about 0.2% and we'll assume we're computing at that rate. If the
actual output position is ever ahead of the computed position, we'll
increase the computed position to the actual position. Thus whenever
the buffer is less than near full, we'll stay ahead of DAC time,
falling back at a rate of about 0.2% until eventually there's
another near-full buffer callback that will push the time back ahead.

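Plan C's clamp can be sketched in a few lines (hypothetical names; the 0.2%
figure is the assumed slow rate from the text):

```cpp
#include <cassert>
#include <cmath>

// Run a computed clock about 0.2% slower than elapsed time and snap it
// forward whenever the actual output position overtakes it, so a near-full
// buffer callback "pushes the time back ahead".
struct PlanCClockSketch {
    double computedTime = 0.0;

    void Advance(double elapsedSeconds, double actualOutputTime) {
        computedTime += elapsedSeconds * (1.0 - 0.002);
        if (actualOutputTime > computedTime)
            computedTime = actualOutputTime;
    }
};
```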
\par Interaction between MIDI, Audio, and Pause
When Pause is used, PauseTime() will increase at the same rate as
AudioTime(), and no more events will be output. Because of the
time advance of mAudioOutLatency plus MIDI latency and the fact that
AudioTime() advances stepwise by mAudioBufferDuration, some extra MIDI
might be output, but the same is true of audio: something like
mAudioOutLatency audio samples will be in the output buffer
(with up to mAudioBufferDuration additional samples, depending on
when the Pause takes effect). When playback is resumed, there will
be a slight delay corresponding to the extra data previously sent.
Again, the same is true of audio. Audio and MIDI will not pause and
resume at exactly the same times, but their pause and resume times
will be within the low tens of milliseconds, and the streams will
be synchronized in any case. I.e. if audio pauses 10ms earlier than
MIDI, it will resume 10ms earlier as well.

\par PortMidi Latency Parameter
PortMidi has a "latency" parameter that is added to all timestamps.
This value must be greater than zero to enable timestamp-based timing,
but serves no other function, so we will set it to 1. All timestamps
must then be adjusted down by 1 before messages are sent. This
adjustment is on top of all the calculations described above. It just
seems too complicated to describe everything in complete detail in one
place.

\par Midi with a time track
When a variable-speed time track is present, MIDI events are output
with the times used by the time track (rather than the raw times).
This ensures MIDI is synchronized with audio.

\par Midi While Recording Only or Without Audio Playback
To reduce duplicate code and to ensure recording is synchronized with
MIDI, a PortAudio stream will always be used, even when there is no
actual audio output. For recording, this ensures that the recorded
audio will be synchronized with the MIDI (otherwise, it gets
out-of-sync if played back with correct timing).

\par NoteTrack PlayLooped
When mPlayLooped is true, output is supposed to loop from mT0 to mT1.
For NoteTracks, we interpret this to mean that any note-on or control
change in the range mT0 <= t < mT1 is sent (notes that start before
mT0 are not played even if they extend beyond mT0). Then, all notes
are turned off. Events in the range mT0 <= t < mT1 are then repeated,
offset by (mT1 - mT0), etc. We do NOT go back to the beginning and
play all control changes (update events) up to mT0, nor do we "undo"
any state changes between mT0 and mT1.

\par NoteTrack PlayLooped Implementation
The mIterator object (an Alg_iterator) returns NULL when there are
no more events scheduled before mT1. At mT1, we want to output
all notes off messages, but the FillOtherBuffers() loop will exit
if mNextEvent is NULL, so we create a "fake" mNextEvent for this
special "event" of sending all notes off. After that, we destroy
the iterator and use PrepareMidiIterator() to set up a NEW one.
At each iteration, time must advance by (mT1 - mT0), so the
accumulated complete loop time (in "unwarped," track time) is computed
by MidiLoopOffset().

**********************************************************************/

#include "MIDIPlay.h"
#include "AudioIO.h"

#include "BasicUI.h"
#include "Prefs.h"
#include "portaudio.h"
#include <portmidi.h>
#include <porttime.h>
#include <thread>

#define ROUND(x) (int) ((x)+0.5)

class NoteTrack;
using NoteTrackConstArray = std::vector < std::shared_ptr< const NoteTrack > >;

namespace {

/*
 Adapt and rename the implementation of PaUtil_GetTime from commit
 c5d2c51bd6fe354d0ee1119ba932bfebd3ebfacc of portaudio
 */
#if defined( __APPLE__ )

#include <mach/mach_time.h>

/* Scaler to convert the result of mach_absolute_time to seconds */
static double machSecondsConversionScaler_ = 0.0;

/* Initialize it */
static struct InitializeTime { InitializeTime() {
   mach_timebase_info_data_t info;
   kern_return_t err = mach_timebase_info( &info );
   if( err == 0 )
      machSecondsConversionScaler_ = 1e-9 * (double) info.numer / (double) info.denom;
} } initializeTime;

static PaTime util_GetTime( void )
{
   return mach_absolute_time() * machSecondsConversionScaler_;
}

#elif defined( __WXMSW__ )

#include "profileapi.h"
#include "sysinfoapi.h"
#include "timeapi.h"

static int usePerformanceCounter_;
static double secondsPerTick_;

static struct InitializeTime { InitializeTime() {
   LARGE_INTEGER ticksPerSecond;

   if( QueryPerformanceFrequency( &ticksPerSecond ) != 0 )
   {
      usePerformanceCounter_ = 1;
      secondsPerTick_ = 1.0 / (double)ticksPerSecond.QuadPart;
   }
   else
   {
      usePerformanceCounter_ = 0;
   }
} } initializeTime;

static double util_GetTime( void )
{
   LARGE_INTEGER time;

   if( usePerformanceCounter_ )
   {
      /*
      Note: QueryPerformanceCounter has a known issue where it can skip forward
      by a few seconds (!) due to a hardware bug on some PCI-ISA bridge hardware.
      This is documented here:
      http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323&

      The work-arounds are not very palatable and involve querying GetTickCount
      at every time step.

      Using rdtsc is not a good option on multi-core systems.

      For now we just use QueryPerformanceCounter(). It's good, most of the time.
      */
      QueryPerformanceCounter( &time );
      return time.QuadPart * secondsPerTick_;
   }
   else
   {
#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP)
      return GetTickCount64() * .001;
#else
      return timeGetTime() * .001;
#endif
   }
}

#elif defined(HAVE_CLOCK_GETTIME)

#include <time.h>

static PaTime util_GetTime( void )
{
   struct timespec tp;
   clock_gettime(CLOCK_REALTIME, &tp);
   return (PaTime)(tp.tv_sec + tp.tv_nsec * 1e-9);
}

#else

#include <sys/time.h>

static PaTime util_GetTime( void )
{
   struct timeval tv;
   gettimeofday( &tv, NULL );
   return (PaTime) tv.tv_usec * 1e-6 + tv.tv_sec;
}

#endif

enum {
   // This is the least positive latency we can
   // specify to Pm_OpenOutput, 1 ms, which prevents immediate
   // scheduling of events:
   MIDI_MINIMAL_LATENCY_MS = 1
};

// return the system time as a double
static double streamStartTime = 0; // bias system time to small number

static double SystemTime(bool usingAlsa)
{
#ifdef __WXGTK__
   if (usingAlsa) {
      struct timespec now;
      // CLOCK_MONOTONIC_RAW is unaffected by NTP or adj-time
      clock_gettime(CLOCK_MONOTONIC_RAW, &now);
      //return now.tv_sec + now.tv_nsec * 0.000000001;
      return (now.tv_sec + now.tv_nsec * 0.000000001) - streamStartTime;
   }
#else
   static_cast<void>(usingAlsa);//compiler food.
#endif

   return util_GetTime() - streamStartTime;
}

bool MIDIPlay::mMidiStreamActive = false;
bool MIDIPlay::mMidiOutputComplete = true;

AudioIOExt::RegisteredFactory sMIDIPlayFactory{
   [](const auto &playbackSchedule){
      return std::make_unique<MIDIPlay>(playbackSchedule);
   }
};

MIDIPlay::MIDIPlay(const PlaybackSchedule &schedule)
   : mPlaybackSchedule{ schedule }
{
#ifdef AUDIO_IO_GB_MIDI_WORKAROUND
   // Pre-allocate with a likely sufficient size, exceeding probable number of
   // channels
   mPendingNotesOff.reserve(64);
#endif

   PmError pmErr = Pm_Initialize();

   if (pmErr != pmNoError) {
      auto errStr =
         XO("There was an error initializing the midi i/o layer.\n");
      errStr += XO("You will not be able to play midi.\n\n");
      wxString pmErrStr = LAT1CTOWX(Pm_GetErrorText(pmErr));
      if (!pmErrStr.empty())
         errStr += XO("Error: %s").Format( pmErrStr );
      // XXX: we are in libaudacity, popping up dialogs not allowed! A
      // long-term solution will probably involve exceptions
      using namespace BasicUI;
      ShowMessageBox(
         errStr,
         MessageBoxOptions{}
            .Caption(XO("Error Initializing Midi"))
            .ButtonStyle(Button::Ok)
            .IconStyle(Icon::Error));

      // Same logic for PortMidi as described above for PortAudio
   }
}

MIDIPlay::~MIDIPlay()
{
   Pm_Terminate();
}

bool MIDIPlay::StartOtherStream(const TransportTracks &tracks,
   const PaStreamInfo* info, double, double rate)
{
   mMidiPlaybackTracks.clear();
   for (const auto &pTrack : tracks.otherPlayableTracks) {
      pTrack->TypeSwitch( [&](const NoteTrack *pNoteTrack){
         mMidiPlaybackTracks.push_back(
            pNoteTrack->SharedPointer<const NoteTrack>());
      } );
   }

   streamStartTime = 0;
   streamStartTime = SystemTime(mUsingAlsa);

   mNumFrames = 0;
   // we want this initial value to be way high. It should be
   // sufficient to assume AudioTime is zero and therefore
   // mSystemMinusAudioTime is SystemTime(), but we'll add 1000s
   // for good measure. On the first callback, this should be
   // reduced to SystemTime() - mT0, and note that mT0 is always
   // positive.
   mSystemMinusAudioTimePlusLatency =
      mSystemMinusAudioTime = SystemTime(mUsingAlsa) + 1000;
   mAudioOutLatency = 0.0; // set when stream is opened
   mCallbackCount = 0;
   mAudioFramesPerBuffer = 0;

   // We use audio latency to estimate how far ahead of DACs we are writing
   if (info) {
      // this is an initial guess, but for PA/Linux/ALSA it's wrong and will be
      // updated with a better value:
      mAudioOutLatency = info->outputLatency;
      mSystemMinusAudioTimePlusLatency += mAudioOutLatency;
   }

   // TODO: it may be that midi out will not work unless audio in or out is
   // active -- this would be a bug and may require a change in the
   // logic here.

   bool successMidi = true;

   if(!mMidiPlaybackTracks.empty()){
      successMidi = StartPortMidiStream(rate);
   }

   // On the other hand, if MIDI cannot be opened, we will not complain
   // return successMidi;
   return true;
}

void MIDIPlay::AbortOtherStream()
{
   mMidiPlaybackTracks.clear();
}

PmTimestamp MidiTime(void *pInfo)
{
   return static_cast<MIDIPlay*>(pInfo)->MidiTime();
}

// Set up state to iterate NoteTrack events in sequence.
// Sends MIDI control changes up to the starting point mT0
// if send is true. Output is delayed by offset to facilitate
// looping (each iteration is delayed more).
void MIDIPlay::PrepareMidiIterator(bool send, double offset)
{
   int i;
   int nTracks = mMidiPlaybackTracks.size();
   // instead of initializing with an Alg_seq, we use begin_seq()
   // below to add ALL Alg_seq's.
   mIterator.emplace(nullptr, false);
   // Iterator not yet initialized, must add each track...
   for (i = 0; i < nTracks; i++) {
      const auto t = mMidiPlaybackTracks[i].get();
      Alg_seq_ptr seq = &t->GetSeq();
      // mark sequence tracks as "in use" since we're handing this
      // off to another thread and want to make sure nothing happens
      // to the data until playback finishes. This is just a sanity check.
      seq->set_in_use(true);
      mIterator->begin_seq(seq,
         // casting away const, but allegro just uses the pointer as an opaque "cookie"
         const_cast<NoteTrack*>(t),
         t->GetOffset() + offset);
   }
   GetNextEvent(); // prime the pump for FillOtherBuffers

   // Start MIDI from current cursor position
   mSendMidiState = true;
   while (mNextEvent &&
          mNextEventTime < mPlaybackSchedule.mT0 + offset) {
      if (send) OutputEvent(0);
      GetNextEvent();
   }
   mSendMidiState = false;
}

bool MIDIPlay::StartPortMidiStream(double rate)
{
#ifdef __WXGTK__
   // Duplicating a bit of AudioIO::StartStream
   // Detect whether ALSA is the chosen host, and do the various involved MIDI
   // timing compensations only then.
   mUsingAlsa = (AudioIOHost.Read() == L"ALSA");
#endif

   int i;
   int nTracks = mMidiPlaybackTracks.size();
   // Only start MIDI stream if there is an open track
   if (nTracks == 0)
      return false;

   //wxPrintf("StartPortMidiStream: mT0 %g mTime %g\n",
   //       mT0, mTime);

   /* get midi playback device */
   PmDeviceID playbackDevice = Pm_GetDefaultOutputDeviceID();
   auto playbackDeviceName = MIDIPlaybackDevice.Read();
   mSynthLatency = MIDISynthLatency_ms.Read();
   if (wxStrcmp(playbackDeviceName, wxT("")) != 0) {
      for (i = 0; i < Pm_CountDevices(); i++) {
         const PmDeviceInfo *info = Pm_GetDeviceInfo(i);
         if (!info) continue;
         if (!info->output) continue;
         wxString interf = wxSafeConvertMB2WX(info->interf);
         wxString name = wxSafeConvertMB2WX(info->name);
         interf.Append(wxT(": ")).Append(name);
         if (wxStrcmp(interf, playbackDeviceName) == 0) {
            playbackDevice = i;
         }
      }
   } // (else playback device has Pm_GetDefaultOutputDeviceID())

   if (playbackDevice < 0)
      return false;

   /* open output device */
   mLastPmError = Pm_OpenOutput(&mMidiStream,
                                playbackDevice,
                                NULL,
                                0,
                                &::MidiTime,
                                this, // supplies pInfo argument to MidiTime
                                MIDI_MINIMAL_LATENCY_MS);
   if (mLastPmError == pmNoError) {
      mMidiStreamActive = true;
      mMidiPaused = false;
      mMidiLoopPasses = 0;
      mMidiOutputComplete = false;
      mMaxMidiTimestamp = 0;
      PrepareMidiIterator(true, 0);

      // It is ok to call this now, but do not send timestamped midi
      // until after the first audio callback, which provides necessary
      // data for MidiTime().
      Pm_Synchronize(mMidiStream); // start using timestamps
   }
   return (mLastPmError == pmNoError);
}

StopOtherStream()709 void MIDIPlay::StopOtherStream()
710 {
711 if (mMidiStream && mMidiStreamActive) {
712 /* Stop Midi playback */
713 mMidiStreamActive = false;
714
715 mMidiOutputComplete = true;
716
717 // now we can assume "ownership" of the mMidiStream
718 // if output in progress, send all off, etc.
719 AllNotesOff();
720 // AllNotesOff() should be sufficient to stop everything, but
721 // in Linux, if you Pm_Close() immediately, it looks like
722 // messages are dropped. ALSA then seems to send All Sound Off
723 // and Reset All Controllers messages, but not all synthesizers
724 // respond to these messages. This is probably a bug in PortMidi
725 // if the All Off messages do not get out, but for security,
726 // delay a bit so that messages can be delivered before closing
727 // the stream. Add 2ms of "padding" to avoid any rounding errors.
728 while (mMaxMidiTimestamp + 2 > MidiTime()) {
729 using namespace std::chrono;
730 std::this_thread::sleep_for(1ms); // deliver the all-off messages
731 }
732 Pm_Close(mMidiStream);
733 mMidiStream = NULL;
734 mIterator->end();
735
736 // set in_use flags to false
737 int nTracks = mMidiPlaybackTracks.size();
738 for (int i = 0; i < nTracks; i++) {
739 const auto t = mMidiPlaybackTracks[i].get();
740 Alg_seq_ptr seq = &t->GetSeq();
741 seq->set_in_use(false);
742 }
743
744 mIterator.reset(); // just in case someone tries to reference it
745 }
746
747 mMidiPlaybackTracks.clear();
748 }
749
static Alg_update gAllNotesOff; // special event for loop ending
// the fields of this event are never used, only the address is important

double MIDIPlay::UncorrectedMidiEventTime(double pauseTime)
{
   double time;
   if (mPlaybackSchedule.mEnvelope)
      time =
         mPlaybackSchedule.RealDuration(mNextEventTime - MidiLoopOffset())
            + mPlaybackSchedule.mT0 + (mMidiLoopPasses *
                                       mPlaybackSchedule.mWarpedLength);
   else
      time = mNextEventTime;

   return time + pauseTime;
}

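The arithmetic above can be sketched in isolation (hypothetical helper, not part of the Audacity API; `realDuration` stands in for the result of `mPlaybackSchedule.RealDuration(...)`):

```cpp
// Illustrative sketch of UncorrectedMidiEventTime: with a time-warp
// envelope, the warped event duration is offset by the playback start
// time t0, advanced by one warped length per completed loop pass, and
// finally shifted by the accumulated pause time.
static double uncorrectedTime(double realDuration, double t0,
                              int loopPasses, double warpedLength,
                              double pauseTime)
{
   return realDuration + t0 + loopPasses * warpedLength + pauseTime;
}
```

For example, an event 1.5 warped seconds into the track, on the third pass of a 4-second loop starting at t0 = 10 s, goes out at 19.5 s plus pause time.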
void MIDIPlay::OutputEvent(double pauseTime)
{
   int channel = (mNextEvent->chan) & 0xF; // must be in [0..15]
   int command = -1;
   int data1 = -1;
   int data2 = -1;

   double eventTime = UncorrectedMidiEventTime(pauseTime);

   // 0.0005 is for rounding
   double time = eventTime + 0.0005 -
                 (mSynthLatency * 0.001);

   time += 1; // MidiTime() has a 1s offset
   // state changes have to go out without delay because the
   // midi stream time gets reset when playback starts, and
   // we don't want to leave any control changes scheduled for later
   if (time < 0 || mSendMidiState) time = 0;
   PmTimestamp timestamp = (PmTimestamp) (time * 1000); /* s to ms */

   // The special event gAllNotesOff means "end of playback, send
   // all notes off on all channels"
   if (mNextEvent == &gAllNotesOff) {
      bool looping = mPlaybackSchedule.GetPolicy().Looping(mPlaybackSchedule);
      AllNotesOff(looping);
      if (looping) {
         // jump back to beginning of loop
         ++mMidiLoopPasses;
         PrepareMidiIterator(false, MidiLoopOffset());
      } else {
         mNextEvent = nullptr;
      }
      return;
   }

   // If mNextEvent's channel is visible, play it; visibility can
   // be updated while playing. Be careful: if we have a note-off,
   // then we must not pay attention to the channel selection
   // or mute/solo buttons, because we must turn the note off
   // even if the user changed something after the note began.
   // Note that because multiple tracks can output to the same
   // MIDI channels, it is not a good idea to send "All Notes Off"
   // when the user presses the mute button. We have no easy way
   // to know what notes are sounding on any given muted track, so
   // we'll just wait for the note-off events to happen.
   // Also note that note-offs are only sent when we call
   // mIterator->request_note_off(), so notes that are not played
   // will not generate random note-offs. There is the interesting
   // case that if the playback is paused, all-notes-off WILL be sent,
   // and if playback resumes, the pending note-off events WILL also
   // be sent (but if that is a problem, there would also be a problem
   // in the non-pause case).
   if (((mNextEventTrack->IsVisibleChan(channel)) &&
        // only play if note is not muted:
        !((mHasSolo || mNextEventTrack->GetMute()) &&
          !mNextEventTrack->GetSolo())) ||
       (mNextEvent->is_note() && !mNextIsNoteOn)) {
      // Note event
      if (mNextEvent->is_note() && !mSendMidiState) {
         // Pitch and velocity
         data1 = mNextEvent->get_pitch();
         if (mNextIsNoteOn) {
            data2 = mNextEvent->get_loud(); // get velocity
            int offset = mNextEventTrack->GetVelocity();
            data2 += offset; // offset comes from per-track slider
            // clip velocity to ensure a legal note-on value
            data2 = (data2 < 1 ? 1 : (data2 > 127 ? 127 : data2));
            // since we are going to play this note, we need to get a note_off
            mIterator->request_note_off();

#ifdef AUDIO_IO_GB_MIDI_WORKAROUND
            mPendingNotesOff.push_back(std::make_pair(channel, data1));
#endif
         }
         else {
            data2 = 0; // 0 velocity means "note off"
#ifdef AUDIO_IO_GB_MIDI_WORKAROUND
            auto end = mPendingNotesOff.end();
            auto iter = std::find(
               mPendingNotesOff.begin(), end, std::make_pair(channel, data1) );
            if (iter != end)
               mPendingNotesOff.erase(iter);
#endif
         }
         command = 0x90; // MIDI NOTE ON (or OFF when velocity == 0)
      // Update event
      } else if (mNextEvent->is_update()) {
         // this code is based on allegrosmfwr.cpp -- it could be improved
         // by comparing attribute pointers instead of string compares
         Alg_update_ptr update = static_cast<Alg_update_ptr>(mNextEvent);
         const char *name = update->get_attribute();

         if (!strcmp(name, "programi")) {
            // Instrument change
            data1 = update->parameter.i;
            data2 = 0;
            command = 0xC0; // MIDI PROGRAM CHANGE
         } else if (!strncmp(name, "control", 7)) {
            // Controller change

            // The number of the controller being changed is embedded
            // in the parameter name.
            data1 = atoi(name + 7);
            // Allegro normalizes controller values
            data2 = ROUND(update->parameter.r * 127);
            command = 0xB0; // MIDI CONTROL CHANGE
         } else if (!strcmp(name, "bendr")) {
            // Bend change

            // Reverse Allegro's post-processing of bend values
            int temp = ROUND(0x2000 * (update->parameter.r + 1));
            if (temp > 0x3fff) temp = 0x3fff; // 14 bits maximum
            if (temp < 0) temp = 0;
            data1 = temp & 0x7f; // low 7 bits
            data2 = temp >> 7;   // high 7 bits
            command = 0xE0; // MIDI PITCH BEND
         } else if (!strcmp(name, "pressurer")) {
            // Pressure change
            data1 = (int) (update->parameter.r * 127);
            if (update->get_identifier() < 0) {
               // Channel pressure
               data2 = 0;
               command = 0xD0; // MIDI CHANNEL PRESSURE
            } else {
               // Key pressure
               data2 = data1;
               data1 = update->get_identifier();
               command = 0xA0; // MIDI POLY PRESSURE
            }
         }
      }
      if (command != -1) {
         // keep track of greatest timestamp used
         if (timestamp > mMaxMidiTimestamp) {
            mMaxMidiTimestamp = timestamp;
         }
         Pm_WriteShort(mMidiStream, timestamp,
                       Pm_Message((int) (command + channel),
                                  (long) data1, (long) data2));
         /* wxPrintf("Pm_WriteShort %lx (%p) @ %d, advance %d\n",
                     Pm_Message((int) (command + channel),
                                (long) data1, (long) data2),
                     mNextEvent, timestamp, timestamp - Pt_Time()); */
      }
   }
}

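Two details of OutputEvent can be illustrated standalone: the 14-bit pitch-bend split, and the byte packing done by PortMidi's `Pm_Message` (reproduced here for illustration; see portmidi.h for the authoritative macro):

```cpp
#include <cstdint>

// Pm_Message packs status, data1, data2 into one word, status byte
// lowest (this mirrors the PortMidi macro).
static int32_t packShort(int status, int data1, int data2)
{
   return ((data2 << 16) & 0xFF0000) |
          ((data1 << 8) & 0xFF00) |
          (status & 0xFF);
}

// The "bendr" case reverses Allegro's normalization: a bend in [-1, +1]
// maps to the 14-bit range [0, 0x3FFF], split into two 7-bit data bytes.
static void bendToData(double bend, int &data1, int &data2)
{
   int temp = (int)(0x2000 * (bend + 1) + 0.5); // round like ROUND
   if (temp > 0x3fff) temp = 0x3fff;            // 14 bits maximum
   if (temp < 0) temp = 0;
   data1 = temp & 0x7f; // low 7 bits
   data2 = temp >> 7;   // high 7 bits
}
```

A centered bend of 0.0 yields the 14-bit midpoint 0x2000, i.e. data1 = 0, data2 = 0x40.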
void MIDIPlay::GetNextEvent()
{
   mNextEventTrack = nullptr; // clear it just to be safe
   // now get the next event and the track from which it came
   double nextOffset;
   if (!mIterator) {
      mNextEvent = nullptr;
      return;
   }
   auto midiLoopOffset = MidiLoopOffset();
   mNextEvent = mIterator->next(&mNextIsNoteOn,
      // Allegro retrieves the "cookie" for the event, which is a NoteTrack
      reinterpret_cast<void **>(&mNextEventTrack),
      &nextOffset, mPlaybackSchedule.mT1 + midiLoopOffset);

   mNextEventTime = mPlaybackSchedule.mT1 + midiLoopOffset + 1;
   if (mNextEvent) {
      mNextEventTime = (mNextIsNoteOn ? mNextEvent->time :
                        mNextEvent->get_end_time()) + nextOffset;
   }
   if (mNextEventTime > (mPlaybackSchedule.mT1 + midiLoopOffset)) {
      // terminate playback at mT1
      mNextEvent = &gAllNotesOff;
      mNextEventTime = mPlaybackSchedule.mT1 + midiLoopOffset - ALG_EPS;
      mNextIsNoteOn = true; // do not look at duration
      mIterator->end();
      mIterator.reset(); // debugging aid
   }
}

bool MIDIPlay::SetHasSolo(bool hasSolo)
{
   mHasSolo = hasSolo;
   return mHasSolo;
}

void MIDIPlay::FillOtherBuffers(
   double rate, unsigned long pauseFrames, bool paused, bool hasSolo)
{
   if (!mMidiStream)
      return;

   // Keep track of time paused. If not paused, fill buffers.
   if (paused) {
      if (!mMidiPaused) {
         mMidiPaused = true;
         AllNotesOff(); // to avoid hanging notes during pause
      }
      return;
   }

   if (mMidiPaused) {
      mMidiPaused = false;
   }

   SetHasSolo(hasSolo);

   // If we compute until mNextEventTime > current audio time,
   // we would have a built-in compute-ahead of mAudioOutLatency, and
   // it's probably good to compute MIDI when we compute audio (so when
   // we stop, both stop at about the same time).
   double time = AudioTime(rate); // compute to here
   // But if mAudioOutLatency is very low, we might need some extra
   // compute-ahead to deal with mSynthLatency or even this thread.
   double actual_latency = (MIDI_MINIMAL_LATENCY_MS + mSynthLatency) * 0.001;
   if (actual_latency > mAudioOutLatency) {
      time += actual_latency - mAudioOutLatency;
   }
   while (mNextEvent &&
          UncorrectedMidiEventTime(PauseTime(rate, pauseFrames)) < time) {
      OutputEvent(PauseTime(rate, pauseFrames));
      GetNextEvent();
   }
}

double MIDIPlay::PauseTime(double rate, unsigned long pauseFrames)
{
   return pauseFrames / rate;
}

// MidiTime() is an estimate in milliseconds of the current audio
// output (DAC) time + 1s. In other words, what Audacity track time
// corresponds to the audio (including pause insertions) at the output?
//
PmTimestamp MIDIPlay::MidiTime()
{
   // note: the extra 0.0005 is for rounding. Round down by casting to
   // unsigned long, then convert to PmTimestamp (currently signed)

   // PRL: the time correction is really MIDI latency achieved by different
   // means than specifying it to Pm_OpenStream. The use of the accumulated
   // sample count generated by the audio callback (in AudioTime()) might also
   // have the virtue of keeping the MIDI output synched with audio.

   PmTimestamp ts;
   // subtract latency here because mSystemMinusAudioTime gets us
   // to the current *write* time, but we're writing ahead by audio output
   // latency (mAudioOutLatency).
   double now = SystemTime(mUsingAlsa);
   ts = (PmTimestamp) ((unsigned long)
        (1000 * (now + 1.0005 -
                 mSystemMinusAudioTimePlusLatency)));
   // wxPrintf("AudioIO::MidiTime() %d time %g sys-aud %g\n",
   //          ts, now, mSystemMinusAudioTime);
   return ts + MIDI_MINIMAL_LATENCY_MS;
}

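The millisecond conversion inside MidiTime can be sketched in isolation (hypothetical helper mirroring the cast-and-truncate above):

```cpp
// System time minus the latency-adjusted system-minus-audio offset gives
// the audio (DAC) time; add the 1 s offset plus 0.5 ms for rounding,
// then truncate to whole milliseconds.
static long midiTimestampMs(double systemNow,
                            double systemMinusAudioPlusLatency)
{
   return (long)(unsigned long)
      (1000 * (systemNow + 1.0005 - systemMinusAudioPlusLatency));
}
```

For example, with system time 5.0 s and an offset of 2.0 s, the audio time is 3.0 s and the timestamp is 4000 ms (3 s plus the 1 s offset).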
void MIDIPlay::AllNotesOff(bool looping)
{
#ifdef __WXGTK__
   bool doDelay = !looping;
#else
   bool doDelay = false;
   static_cast<void>(looping); // compiler food
#endif

   // to keep track of when MIDI should all be delivered,
   // update mMaxMidiTimestamp to now:
   PmTimestamp now = MidiTime();
   if (mMaxMidiTimestamp < now) {
      mMaxMidiTimestamp = now;
   }
#ifdef AUDIO_IO_GB_MIDI_WORKAROUND
   // PRL:
   // Send individual note-off messages for each note-on not yet paired.

   // RBD:
   // Even this did not work as planned. My guess is ALSA does not use
   // a "stable sort" for timed messages, so that when a note-off is
   // added later at the same time as a future note-on, the order is
   // not respected, and the note-off can go first, leaving a stuck note.
   // The workaround here is to use mMaxMidiTimestamp to ensure that
   // note-offs come at least 1ms later than any previous message.

   // PRL:
   // I think we should do that only when stopping or pausing, not when looping.
   // Note that on Linux, MIDI always uses ALSA, no matter whether portaudio
   // uses some other host api.

   mMaxMidiTimestamp += 1;
   for (const auto &pair : mPendingNotesOff) {
      Pm_WriteShort(mMidiStream,
                    (doDelay ? mMaxMidiTimestamp : 0),
                    Pm_Message(
                       0x90 + pair.first, pair.second, 0));
      mMaxMidiTimestamp++; // allow 1ms per note-off
   }
   mPendingNotesOff.clear();

   // Proceed to do the usual messages too.
#endif

   for (int chan = 0; chan < 16; chan++) {
      Pm_WriteShort(mMidiStream,
                    (doDelay ? mMaxMidiTimestamp : 0),
                    Pm_Message(0xB0 + chan, 0x7B, 0));
      mMaxMidiTimestamp++; // allow 1ms per all-notes-off
   }
}

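The loop above emits MIDI Control Change #123 ("All Notes Off") on each of the 16 channels. An illustrative sketch of the raw message, packed the way PortMidi's `Pm_Message` would pack it:

```cpp
#include <cstdint>

// CC#123 (0x7B) with value 0 is "All Notes Off"; the status byte is
// 0xB0 (control change) plus the channel number 0..15. The packing
// mirrors PortMidi's Pm_Message macro.
static int32_t allNotesOffMessage(int chan)
{
   int status = 0xB0 + chan; // control change on this channel
   int controller = 0x7B;    // All Notes Off
   int value = 0;
   return ((value << 16) & 0xFF0000) |
          ((controller << 8) & 0xFF00) |
          (status & 0xFF);
}
```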
void MIDIPlay::ComputeOtherTimings(double rate,
   const PaStreamCallbackTimeInfo *timeInfo,
   unsigned long framesPerBuffer
)
{
   if (mCallbackCount++ == 0) {
      // This is effectively mSystemMinusAudioTime when the buffer is empty:
      mStartTime = SystemTime(mUsingAlsa) - mPlaybackSchedule.mT0;
      // later, mStartTime - mSystemMinusAudioTime will tell us latency
   }

   /* for Linux, estimate a smooth audio time as a slowly-changing
      offset from system time */
   // rnow is system time as a double to simplify math
   double rnow = SystemTime(mUsingAlsa);
   // anow is next-sample-to-be-computed audio time as a double
   double anow = AudioTime(rate);

   if (mUsingAlsa) {
      // timeInfo's fields are not all reliable.

      // enow is audio time estimated from our clock synchronization protocol,
      // which produces mSystemMinusAudioTime. But we want the estimate
      // to drift low, so we steadily increase mSystemMinusAudioTime to
      // simulate a fast system clock or a slow audio clock. If anow > enow,
      // we'll update mSystemMinusAudioTime to keep in sync. (You might think
      // we could just use anow as the "truth", but it has a lot of jitter,
      // so we are using enow to smooth out this jitter, in fact to < 1ms.)
      // Add worst-case clock drift using previous framesPerBuffer:
      const auto increase =
         mAudioFramesPerBuffer * 0.0002 / rate;
      mSystemMinusAudioTime += increase;
      mSystemMinusAudioTimePlusLatency += increase;
      double enow = rnow - mSystemMinusAudioTime;

      // now, use anow instead if it is ahead of enow
      if (anow > enow) {
         mSystemMinusAudioTime = rnow - anow;
         // Update our mAudioOutLatency estimate during the first 20 callbacks.
         // During this period, the buffer should fill. Once we have a good
         // estimate of mSystemMinusAudioTime (expected in fewer than 20
         // callbacks) we want to stop the updating in case there is clock
         // drift, which would cause the mAudioOutLatency estimate to drift
         // as well. The clock drift in the first 20 callbacks should be
         // negligible, however.
         if (mCallbackCount < 20) {
            mAudioOutLatency = mStartTime -
               mSystemMinusAudioTime;
         }
         mSystemMinusAudioTimePlusLatency =
            mSystemMinusAudioTime + mAudioOutLatency;
      }
   }
   else {
      // If not using ALSA, rely on timeInfo to have meaningful values that are
      // more precise than the output latency value reported at stream start.
      mSystemMinusAudioTime = rnow - anow;
      mSystemMinusAudioTimePlusLatency =
         mSystemMinusAudioTime +
         (timeInfo->outputBufferDacTime - timeInfo->currentTime);
   }

   mAudioFramesPerBuffer = framesPerBuffer;
   mNumFrames += framesPerBuffer;
}

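The ALSA branch's drift-compensated offset update can be isolated as follows (hypothetical helper; member-variable state becomes parameters and the return value):

```cpp
// Each callback, nudge the system-minus-audio offset upward by a
// worst-case clock-drift allowance, so the smoothed estimate enow
// drifts low; whenever the jittery sample-count time anow runs ahead
// of enow, snap the offset back so that enow == anow again.
static double updateSystemMinusAudioTime(double offset, double rnow,
   double anow, unsigned long prevFramesPerBuffer, double rate)
{
   offset += prevFramesPerBuffer * 0.0002 / rate; // drift allowance
   double enow = rnow - offset; // smoothed audio-time estimate
   if (anow > enow)
      offset = rnow - anow;     // resynchronize to the sample count
   return offset;
}
```

If the sample-count time lags the smoothed estimate, the offset only creeps upward; when it pulls ahead, the offset snaps down to match.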
unsigned MIDIPlay::CountOtherSoloTracks() const
{
   return std::count_if(
      mMidiPlaybackTracks.begin(), mMidiPlaybackTracks.end(),
      [](const auto &pTrack){ return pTrack->GetSolo(); } );
}

void MIDIPlay::SignalOtherCompletion()
{
   mMidiOutputComplete = true;
}
}

bool MIDIPlay::IsActive()
{
   return ( mMidiStreamActive && !mMidiOutputComplete );
}

bool MIDIPlay::IsOtherStreamActive() const
{
   return IsActive();
}

AudioIODiagnostics MIDIPlay::Dump() const
{
   return {
      wxT("mididev.txt"),
      GetMIDIDeviceInfo(),
      wxT("MIDI Device Info")
   };
}