The ALSA Driver API
This calls snd_card_disconnect() to disconnect all of the card's components and waits until all pending files are closed. It ensures that all accesses from user space have finished so that the driver can release its resources gracefully.
Creates a new internal PCM instance with no userspace device or procfs entries. This is used by ASoC Back End PCMs in order to create a PCM that will only be used internally by kernel drivers, i.e. it cannot be opened by userspace. It provides existing ASoC component drivers with a substream and access to any private data.
This function will work correctly if the control has been registered for a component, either with snd_soc_add_codec_controls() or snd_soc_add_platform_controls(), or via table based setup for a CODEC, a platform or a component driver. Otherwise the behavior is undefined.
This function will only work correctly if the control has been registered with snd_soc_add_platform_controls() or via table based setup of a snd_soc_platform_driver. Otherwise the behavior is undefined.
But when I look through the asoundlib API here: -project.org/alsa-doc/alsa-lib/ it seems not to have the same functions as the kernel API I linked above. At this point I am confused, because I am not sure when to call the kernel API vs the asoundlib API when playing audio.
Application programmers should use the library API rather than the kernel API. The library offers 100% of the functionality of the kernel API, but adds major improvements in usability, making the application code simpler and better looking. In addition, future fixes or compatibility code may be placed in the library code instead of the kernel driver.
We need application developers who choose to use ALSA as the basis for their programs, programmers to work on low level drivers, writers to extend and improve our documentation. If you are interested, please subscribe to a mailing list.
The ALSA library API is the interface to the ALSA drivers. Developers need to use the functions in this API to achieve native ALSA support for their applications. The ALSA lib documentation is a valuable developer reference to the available functions. In many ways it is a tutorial. The latest on-line documentation is generated from the alsa-lib GIT sources.
On Linux, sound servers, like sndio, PulseAudio, JACK (low-latency professional-grade audio editing and mixing) and PipeWire, and higher-level APIs (e.g. OpenAL, SDL audio, etc.) work on top of ALSA and its sound card device drivers. ALSA succeeded the older Linux port of the Open Sound System (OSS).
Besides the sound device drivers, ALSA bundles a user-space library for application developers who want to use driver features through an interface that is higher-level than the interface provided for direct interaction with the kernel drivers. Unlike the kernel API, which tries to reflect the capabilities of the hardware directly, ALSA's user-space library presents an abstraction that remains as standardized as possible across disparate underlying hardware elements. This goal is achieved in part by using software plug-ins; for example, many modern sound cards or built-in sound chips do not have a "master volume" control. Instead, for these devices, the user space library provides a software volume control using the "softvol" plug-in, and ordinary application software need not care whether such a control is implemented by underlying hardware or software emulation of such underlying hardware.
In addition to the software framework internal to the Linux kernel, the ALSA project also provides the command-line tools and utilities alsactl, amixer, arecord/aplay and alsamixer, an ncurses-based TUI.
The ALSA utility package, provided by the Linux community, contains the command line utilities for the ALSA project (aplay, arecord, amixer, alsamixer ...). These tools are useful for controlling sound cards. They also serve as examples of ALSA API use for application implementation.
The ALSA Library package contains the ALSA library used by programs (for instance alsa-utils programs) requiring access to the ALSA sound interface. The ALSA library provides a level of abstraction, such as the PCM and control abstractions, over the audio devices provided by the kernel modules.
The ALSA core provides an API to implement audio drivers and PCM/control interfaces to expose audio devices to userland. The PCM interface handles the data flow, while the control interface manages the controls exported by the ALSA driver (audio path, volumes...).
The aim of the ALSA System on Chip (ASoC) layer is to improve ALSA support for embedded system-on-chip processors and audio codecs. The ASoC framework provides a DMA engine which interfaces with the DMA framework to handle the transfer of audio samples. ASoC also supports the dynamic power management of audio paths through the DAPM driver. ASoC acts as an ALSA driver, which splits an embedded audio system into three types of platform independent drivers: the CPU DAI, the codec and the machine drivers.
The ALSA/ASoC and the audio graph card must be enabled in the kernel configuration, as shown below, to enable the sound support. On top of this, the user has to activate the CPU and Codec drivers according to the chosen hardware. The user can use Linux Menuconfig tool to select the required drivers:
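As an illustration, the relevant options typically look like the fragment below. The exact symbols depend on the kernel version and the chosen hardware; the CPU-side and codec driver options shown in comments are examples, not a universal recipe:

```
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_AUDIO_GRAPH_CARD=y
# CPU DAI and codec drivers are hardware specific, for example:
# CONFIG_SND_SOC_STM32_SAI=m
# CONFIG_SND_SOC_CS42L51=m
```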
The default alsa.conf is adequate for most installations. For extra functionality and/or advanced control of your sound device, you may need to create additional configuration files. For information on the available configuration parameters, visit -project.org/main/index.php/Asoundrc.
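As an example of such an additional configuration file, a software volume control can be layered on top of a hardware device with the softvol plug-in in ~/.asoundrc. This is a minimal sketch; the PCM name "softvol" and the control name "Master Softvol" are arbitrary choices for illustration:

```
# ~/.asoundrc - software volume on top of the first hardware device
pcm.softvol {
    type softvol
    slave.pcm "hw:0,0"
    control {
        name "Master Softvol"   # appears as a mixer control in alsamixer
        card 0
    }
}
pcm.!default {
    type plug
    slave.pcm "softvol"
}
```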
The Advanced Linux Sound Architecture (ALSA) subsystem provides audio and MIDI capabilities to Linux systems, including a user space library to simplify application programming (alsa-lib) and support for the older Open Sound System (OSS) architecture through legacy compatibility modes. Specifically for system-on-chips, the architecture defines an ALSA system-on-chip (ASoC) layer which provides optimized support for embedded devices.
The audio system to be used. In order to use sdl2 as the audio driver, the application is responsible for initializing SDL's audio subsystem. Note: sdl2 and waveout are available since fluidsynth 2.1.
This is the number of audio samples most audio drivers will request from the synth at one time. In other words, it is the number of samples the synth is allowed to render in one go when no state changes (events) are about to happen. Because of that, specifying values that are too large here may cause MIDI events to be poorly quantized (i.e. mistimed) when a MIDI driver or the synth's API is used directly, as fluidsynth cannot determine when those events will arrive. This issue does not matter when using the MIDI player or the MIDI sequencer, because in those cases fluidsynth does know when events will be received.
Sets the realtime scheduling priority of the audio synthesis thread. This includes the synthesis threads created by the synth (in case synth.cpu-cores was greater than 1). A value of 0 disables high priority scheduling. Linux is the only platform which currently makes use of the different priority levels specified by this setting. On other operating systems the thread priority is set to maximum. Drivers which use this option: alsa, oss and pulseaudio.
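In C, settings like these are applied through the fluid_settings_* calls before the audio driver is created. The values below are purely illustrative, not recommendations:

```c
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t *settings = new_fluid_settings();

    /* choose the audio driver and its realtime priority (illustrative values) */
    fluid_settings_setstr(settings, "audio.driver", "alsa");
    fluid_settings_setint(settings, "audio.realtime-prio", 60);
    /* smaller periods give tighter timing for directly injected MIDI events */
    fluid_settings_setint(settings, "audio.period-size", 256);

    fluid_synth_t *synth = new_fluid_synth(settings);
    fluid_audio_driver_t *driver = new_fluid_audio_driver(settings, synth);

    /* ... load a SoundFont and play ... */

    delete_fluid_audio_driver(driver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```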
This setting is a comma-separated integer list that maps fluidsynth mono-channels to CoreAudio device output channels. Each position in the list represents the output channel of the CoreAudio device. The value of each position indicates the zero-based index of the fluidsynth output mono-channel to route there (i.e. the buffer index used for fluid_synth_process()). Additionally, the special value of -1 will turn off an output. For example, the default map for a single stereo output is "0,1". A value of "0,0" will copy the left channel to the right, a value of "1,0" will flip left and right, and a value of "-1,1" will play only the right channel. With a six-channel output device, and the synth.audio-channels and synth.audio-groups settings both set to "2", a channel map of "-1,-1,0,1,2,3" will result in notes from odd MIDI channels (audible on the first stereo channel, i.e. mono-indices 0,1) being sent to outputs 3 and 4, and even MIDI channels (audible on the second stereo channel, i.e. mono-indices 2,3) being sent to outputs 5 and 6. If the list specifies fewer entries than there are available output channels, the outputs beyond those specified will keep the default channel mapping given by the CoreAudio driver.
Defines the byte order when using the 'file' driver or file renderer to store audio to a file. 'auto' uses the default for the given file type, 'cpu' uses the CPU byte order, 'big' uses big endian byte order and 'little' uses little endian byte order.
Device to use for PortAudio driver output. Note that 'PortAudio Default' is a special value which outputs to the default PortAudio device. The format of the device name is: "device index:host API name:device name", e.g. "11:Windows DirectSound:SB PCI".
I generated a log file, which shows the specific error: [ERR] OutputALSA::init : snd_pcm_hw_params_set_rate returned -22 = Invalid argument. I realise that this could be a really device/driver-specific problem, but I have no clue what this error means. Could anyone point me in the right direction?
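Error -22 is -EINVAL, which usually means the driver cannot do that exact sample rate. A common workaround is to ask for the nearest supported rate with snd_pcm_hw_params_set_rate_near() instead of insisting on an exact one. A minimal sketch, assuming the "default" device and a placeholder rate of 44100 Hz:

```c
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;   /* requested rate; placeholder value */
    int err;

    if ((err = snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
        fprintf(stderr, "open: %s\n", snd_strerror(err));
        return 1;
    }
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);

    /* ask for the closest supported rate instead of failing with -EINVAL;
       the driver writes the rate it actually chose back into 'rate' */
    if ((err = snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL)) < 0)
        fprintf(stderr, "set_rate_near: %s\n", snd_strerror(err));
    else
        printf("using rate %u Hz\n", rate);

    snd_pcm_close(pcm);
    return 0;
}
```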
ALSA supports two transfer methods for PCM playback: read/write transfer, where samples are written to the device using standard read and write functions, and direct read/write (mmap) transfer, where samples are written directly to a mapped memory area and the driver is signalled once this has been done.
ALSA provides an API for both cases, and each application using ALSA to access the audio device can choose which API to use. But since mmap was not supported by the bcm2835 driver, applications using the mmap API did not work on the Raspberry Pi.
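The read/write method described above can be sketched in C as a loop around snd_pcm_writei(). This is a hedged illustration (the PCM handle is assumed to be already opened and configured, and silence is written just to show the call pattern):

```c
#include <alsa/asoundlib.h>
#include <string.h>

/* Write one second of 16-bit stereo silence using the read/write
 * (snd_pcm_writei) transfer method. Returns 0 on success. */
int play_silence(snd_pcm_t *pcm, unsigned int rate)
{
    short buf[1024 * 2];                 /* 1024 frames x 2 channels */
    snd_pcm_uframes_t frames = 1024;
    memset(buf, 0, sizeof(buf));

    for (unsigned int done = 0; done < rate; done += frames) {
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, frames);
        if (n == -EPIPE)                 /* underrun: recover and retry */
            snd_pcm_prepare(pcm);
        else if (n < 0)                  /* other errors are fatal here */
            return (int)n;
    }
    return 0;
}
```

The mmap variant would instead use snd_pcm_mmap_begin()/snd_pcm_mmap_commit() to fill the mapped buffer directly, which is exactly the API path that was unavailable on the bcm2835 driver.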