The synthesizer itself does not write any audio to the audio output. This allows application developers to manage the audio output themselves if they wish. The next section describes the use of the synthesizer without an audio driver in more detail.
Creating the audio driver is straightforward: set the appropriate settings and create the driver object. Because FluidSynth supports several audio systems, you may want to change which one is used. The list below shows the audio systems that are currently supported, giving the name as used by the fluidsynth library and a description.
alsa: Advanced Linux Sound Architecture
oss: Open Sound System (Linux)
jack: JACK Audio Connection Kit (Linux, Mac OS X)
portaudio: Portaudio Library (MacOS 9 & X, Windows, Linux)
sndmgr: Apple SoundManager (Mac OS Classic)
coreaudio: Apple CoreAudio (MacOS X, experimental)
dsound: Microsoft DirectSound (Windows)
The default audio driver depends on the settings with which FluidSynth was compiled. You can get the default driver with fluid_settings_getstr_default(settings, "audio.driver"). To get the list of available drivers, use the fluid_settings_foreach_option function. Finally, you can set the driver with fluid_settings_setstr. In most cases, the default driver should work out of the box.
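As a minimal sketch, the following program prints the available audio drivers and then selects one explicitly. The exact prototypes (in particular the const-qualification of the callback arguments) vary slightly between FluidSynth versions, so treat this as an illustration rather than a definitive listing:

#include <stdio.h>
#include <fluidsynth.h>

/* Called once for every available option of the "audio.driver" setting */
static void print_driver_option(void* data, char* name, char* option)
{
    printf("available %s: %s\n", name, option);
}

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();

    /* List the audio drivers compiled into this FluidSynth build */
    fluid_settings_foreach_option(settings, "audio.driver", NULL, print_driver_option);

    /* Select one of them explicitly */
    fluid_settings_setstr(settings, "audio.driver", "alsa");

    delete_fluid_settings(settings);
    return 0;
}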
Additional options that define the audio quality and latency are "audio.sample-format", "audio.period-size", and "audio.periods". The details are described later.
You create the audio driver with the new_fluid_audio_driver function. This function takes the settings and synthesizer objects as arguments. For example:
#include <fluidsynth.h>

void init()
{
    fluid_settings_t* settings;
    fluid_synth_t* synth;
    fluid_audio_driver_t* adriver;

    settings = new_fluid_settings();

    /* Set the synthesizer settings, if necessary */
    synth = new_fluid_synth(settings);

    /* Select the audio driver and create it; audio output starts immediately */
    fluid_settings_setstr(settings, "audio.driver", "jack");
    adriver = new_fluid_audio_driver(settings, synth);
}
As soon as the audio driver is created, it will start playing. The audio driver creates a separate thread that runs in real-time mode (if the application has sufficient privileges) and calls the synthesizer object to generate the audio.
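Conversely, deleting the driver object stops the audio thread again; the synthesizer itself is not affected. A brief sketch, assuming the objects from the example above are still accessible:

/* Stop audio output: the driver thread terminates and the audio device is closed */
delete_fluid_audio_driver(adriver);

/* The synthesizer and settings can be deleted later, in this order */
delete_fluid_synth(synth);
delete_fluid_settings(settings);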
There are a number of general audio driver settings. The audio.driver setting defines the audio subsystem that will be used. The audio.periods and audio.period-size settings define the latency and the robustness against scheduling delays. There are additional settings for the audio subsystems used; they will be documented later.
Table 2. General audio driver settings
audio.driver
  Type: string
  Default: alsa (Linux), dsound (Windows), sndman (MacOS 9), coreaudio (MacOS X)
  Options: alsa, oss, jack, dsound, sndman, coreaudio, portaudio
  Description: The audio system to be used.

audio.periods
  Type: int
  Default: 16 (Linux, MacOS X), 8 (Windows)
  Min-Max: 2-64
  Description: The number of audio buffers used by the driver. This number of buffers, multiplied by the buffer size (see the audio.period-size setting), determines the maximum latency of the audio driver.

audio.period-size
  Type: int
  Default: 64 (Linux, MacOS X), 512 (Windows)
  Min-Max: 64-8192
  Description: The size of the audio buffers (in frames).

audio.sample-format
  Type: string
  Default: "16bits"
  Options: "16bits", "float"
  Description: The format of the audio samples. This is currently only an indication; the audio driver may ignore this setting if it cannot handle the specified format.
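As an illustration of the latency trade-off, the period settings can be lowered before the driver is created. The values below are only an example, chosen within the ranges listed in Table 2, and reuse the variable names from the init() example above:

/* Maximum latency is roughly audio.periods * audio.period-size / sample rate.
   4 periods of 128 frames at the default sample rate of 44100 Hz
   give about 11.6 ms. */
fluid_settings_setint(settings, "audio.periods", 4);
fluid_settings_setint(settings, "audio.period-size", 128);

/* The driver must be created (or re-created) for the new values to take effect */
adriver = new_fluid_audio_driver(settings, synth);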