The Linux kernel divides device drivers into two broad classes: block
and character. A block device transfers data in fixed-size blocks and
can host a filesystem; the drivers for your IDE and SCSI disks are
block drivers. A character device is accessed as a stream of bytes,
much like an ordinary file; tape drives and soundcards belong to this
class, and the character device driver implements the system's I/O
calls (open/close, read/write) on the requested device. Drivers are
either compiled into the kernel or built as modules to be dynamically
loaded into kernel space. Once a module occupies kernel space, its
services are available exactly like those of any other
system function. Applications can then access those services by
reading and writing to the special files found in the
reading and writing to the special files found in the
/dev directory (e.g. /dev/dsp or
/dev/audio).
Device drivers may be very simple or very complex. Writing a Linux
soundcard device driver is a relatively complex task, hence the need
for complete technical documentation and specifications from the card
manufacturer. Happily, some manufacturers have taken an open attitude
toward supplying that information to driver developers, and you can
view the source code for the resulting kernel sound modules in
/usr/src/linux/drivers/sound. Recommended modules for
study include sound_core.c (the top level handler for the
Linux sound system) and opl3sa.c (the driver for the
Yamaha YMF701B, also known as the OPL3-SA chipset).
The following skeleton, adapted from opl3sa.c with its function bodies omitted, shows the overall structure of the driver:
#include <linux/init.h> /* necessary headers */
#include <linux/module.h>
#undef SB_OK
#include "sound_config.h"
#include "ad1848.h" /* header for the AD1848 support chipset */
#include "mpu401.h" /* header for the MIDI interface */
/*
* Begin SoundBlaster mode setup routines
*/
#ifdef SB_OK /* SoundBlaster mode defined ? */
#include "sb.h" /* header for SoundBlaster mode */
static int sb_initialized = 0;
#endif
static int kilroy_was_here = 0; /* some init values */
static int mpu_initialized = 0;
static int *opl3sa_osp = NULL;
static unsigned char opl3sa_read(int addr) { /* ... */ } /* set up read operation */
static void opl3sa_write(int addr, int data) { /* ... */ } /* set up write operation */
static int __init opl3sa_detect(void) { /* ... */ } /* detect OPL3-SA chipset */
/*
* Probe and attach routines for the Windows Sound System mode of the OPL3-SA
*/
static int __init probe_opl3sa_wss(struct address_info *hw_config) { /* ... */ }
static void __init attach_opl3sa_wss(struct address_info *hw_config) { /* ... */ }
static int __init probe_opl3sa_mpu(struct address_info *hw_config) { /* ... */ }
static void __exit unload_opl3sa_wss(struct address_info *hw_config) { /* ... */ }
static inline void __exit unload_opl3sa_mpu(struct address_info *hw_config) { /* ... */ }
/*
* End WSS routines
*/
#ifdef SB_OK
static inline void __exit unload_opl3sa_sb(struct address_info *hw_config) { /* ... */ }
#endif
/*
* End SoundBlaster routines
*/
static int found_mpu; /* found a MIDI interface ? */
static struct address_info cfg;
static struct address_info cfg_mpu;
static int __initdata io = -1; /* initialize audio I/O port, IRQ, and DMA channels */
static int __initdata irq = -1;
static int __initdata dma = -1;
static int __initdata dma2 = -1;
static int __initdata mpu_io = -1; /* initialize MIDI base address and IRQ */
static int __initdata mpu_irq = -1;
MODULE_PARM(io,"i"); /* set parameter values */
MODULE_PARM(irq,"i");
MODULE_PARM(dma,"i");
MODULE_PARM(dma2,"i");
MODULE_PARM(mpu_io,"i");
MODULE_PARM(mpu_irq,"i");
static int __init init_opl3sa(void) { /* ... */ } /* initialize OPL3-SA */
static void __exit cleanup_opl3sa(void) { /* ... */ } /* exit and cleanup */
module_init(init_opl3sa); /* make it start */
module_exit(cleanup_opl3sa); /* make it stop */
True to form, we see the initialization, read/write, and detection routines for the chipset, load/unload routines for the WSS (Windows Sound System) and MPU (external MIDI interface) functions of the chipset, and the mandatory exit/cleanup routine.
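The MODULE_PARM declarations in the skeleton let you hand resource settings to the driver at load time. Under the modutils scheme of the 2.2/2.4 kernels this was typically done in /etc/modules.conf; the fragment below is an example only, and the actual I/O port, IRQ, and DMA values must match your card's hardware configuration:

```
alias sound-slot-0 opl3sa
options opl3sa io=0x530 irq=11 dma=0 dma2=1 mpu_io=0x330 mpu_irq=9
```

The parameter names (io, irq, dma, dma2, mpu_io, mpu_irq) correspond one-to-one with the MODULE_PARM lines shown above.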
Interested readers should consult the complete OPL3-SA driver source code for a more detailed understanding. See also the Resources listings at the end of this article for guides to more information on writing Linux device drivers. In particular, Alessandro Rubini and Jonathan Corbet's book (Linux Device Drivers) is highly recommended to anyone thinking about writing a Linux driver.
As mentioned earlier, Linux applications do not normally access hardware directly; thus, applications developers do not usually write code to directly access and control a device such as a soundcard. Instead, an application programming interface (API) provides the developer with a hardware-independent set of I/O controls over the device's services.
The OSS/Free API (at
/usr/src/linux/include/linux/soundcard.h) is the default
programming interface for the kernel sound modules. The OSS/Linux API
is an enhanced and expanded version of the OSS/Free programming
interface; it is normally found at
/usr/lib/oss/soundcard.h (the OSS/Linux default
installation path).
The following code fragment illustrates how to control a soundcard by programming with the OSS/Free API:
/*
*
* shameless rip-off of example code by Jeff Tranter
* from his Linux Multimedia Guide
*
*/
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h> /* for exit() */
#include <fcntl.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <linux/soundcard.h>

int main()
{
    int fd;     /* device file descriptor */
    int arg;
    int status;

    /*** open the device file (/dev/dsp) for read/write operations ***/
    fd = open("/dev/dsp", O_RDWR);
    if (fd < 0) {
        perror("error opening /dev/dsp");
        exit(1);
    }

    /*** set the sample size (8 or 16 bits) ***/
    arg = 16;
    status = ioctl(fd, SOUND_PCM_WRITE_BITS, &arg);
    if (status == -1) {
        perror("error from SOUND_PCM_WRITE_BITS ioctl");
        exit(1);
    }

    /*** set the number of channels ***/
    arg = 2;
    status = ioctl(fd, SOUND_PCM_WRITE_CHANNELS, &arg);
    if (status == -1) {
        perror("error from SOUND_PCM_WRITE_CHANNELS ioctl");
        exit(1);
    }

    /*** set the PCM sampling rate for the device ***/
    arg = 44100;
    status = ioctl(fd, SOUND_PCM_WRITE_RATE, &arg);
    if (status == -1) {
        perror("error from SOUND_PCM_WRITE_RATE ioctl");
        exit(1);
    }

    /*** The device is now open and ready to read or write data. ***/

    /*** close /dev/dsp ***/
    status = close(fd);
    if (status == -1) {
        perror("error closing /dev/dsp");
        exit(1);
    }
    return 0;
}
Note that the device interface code is hardware-independent. It
does not need to know what particular soundcard you have: the API
provides a generalized control interface that lets the application
developer ignore the hardware specifics and write only to the device
file (/dev/dsp in this example). Manipulating the bits in
the hardware registers of the card is left to the driver for that
card. For instance, in the fragment above, we see no indication that
my soundcard is a SoundBlaster; however, my kernel sound driver is
indeed the SoundBlaster module, and when an audio service is requested
that module does its duty, translating the request into my
SoundBlaster's unique command set.
[Figure: The Linux Soundcard Driver]
The OSS/Linux applications interface is an enhanced and expanded version of OSS/Free. You can obtain the OSS/Linux API in PDF format from 4Front's site, along with some examples of coding for PCM audio, the soundcard mixer, and MIDI. The API covers a greater range of cards, and an accordingly broader range of functions, but the basic programming style is similar to OSS/Free.
The Advanced Linux Sound Architecture (ALSA) goes beyond the capabilities of the OSS/Free API and is generally regarded as the likely successor to OSS/Free as the kernel sound API. The ALSA driver can be compiled with an OSS/Free emulation mode completely compatible with the existing kernel sound API; however, given the truly advanced nature of the ALSA driver, I urge developers to write their new audio applications using ALSA in its native mode (i.e., leaving behind OSS/Free legacy code). The next code fragment shows how to do the same thing as the OSS/Free fragment, but in native ALSA mode.
//
// alsa.c
// ALSA API demonstration code
// courtesy Andy Lo A Foe
// 20 April 2001
//

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/asoundlib.h>

snd_pcm_t *sound_handle;
snd_output_t *errlog;
snd_pcm_hw_params_t *hwparams;

int main(void)
{
    int err;

    // Connect error reporting to stderr
    snd_output_stdio_attach(&errlog, stderr, 0);

    if (snd_pcm_open(&sound_handle, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
    {
        fprintf(stderr, "Error opening hw:0,0\n");
        snd_output_close(errlog);
        return 1;
    }

    // Set up the hardware device for 16-bit, 44.1 kHz, stereo.
    // First initialize the hwparams struct using the sound_handle
    // we got for our sound hardware
    snd_pcm_hw_params_alloca(&hwparams);
    err = snd_pcm_hw_params_any(sound_handle, hwparams);
    if (err < 0)
        goto _alsa_error;

    // Now request the desired parameters one by one
    // Access method, interleaved or non-interleaved
    err = snd_pcm_hw_params_set_access(sound_handle, hwparams,
                                       SND_PCM_ACCESS_RW_INTERLEAVED);
    if (err < 0)
        goto _alsa_error;

    // The sample format, signed 16-bit little endian
    err = snd_pcm_hw_params_set_format(sound_handle, hwparams,
                                       SND_PCM_FORMAT_S16_LE);
    if (err < 0)
        goto _alsa_error;

    // The sample rate
    err = snd_pcm_hw_params_set_rate(sound_handle, hwparams, 44100, 0);
    if (err < 0)
        goto _alsa_error;

    // Number of channels we want, stereo = 2
    err = snd_pcm_hw_params_set_channels(sound_handle, hwparams, 2);
    if (err < 0)
        goto _alsa_error;

    // The period size. For all practical purposes this is synonymous
    // with OSS/Free's fragment size.
    // Note that this is in frames (frame = nr_channels * sample_width),
    // so a value of 1024 means 4096 bytes (1024 frames x 2 channels x 2 bytes)
    err = snd_pcm_hw_params_set_period_size(sound_handle, hwparams, 1024, 0);
    if (err < 0)
        goto _alsa_error;

    // The number of periods we want to allocate, 4 is reasonable
    err = snd_pcm_hw_params_set_periods(sound_handle, hwparams, 4, 0);
    if (err < 0)
        goto _alsa_error;

    // Finally set up our hardware with the selected values
    err = snd_pcm_hw_params(sound_handle, hwparams);
    if (err < 0) {
        fprintf(stderr, "Unable to set hardware parameters:\n");
        snd_pcm_hw_params_dump(hwparams, errlog);
        return 2;
    }

    fprintf(stdout, "Success!\n");
    // At this point you can start sending PCM data to the device

    snd_pcm_close(sound_handle);
    snd_output_close(errlog);
    return 0;

_alsa_error:
    fprintf(stderr, "Invalid hardware parameter for device:\n");
    snd_pcm_hw_params_dump(hwparams, errlog);
    snd_pcm_close(sound_handle);
    snd_output_close(errlog);
    return 1;
}
As you can see, the ALSA API permits more robust control over the
device, yet retains the hardware independence necessary for a
generalized programming interface. ALSA offers many amenities to audio
developers, particularly for professional applications (such as
multitrack hard disk recorders) that demand higher performance from
the entire system. Developers do not write directly against kernel
ioctls(), and higher level support is available for PCM
plugins and transparent network audio. Readers interested in the
details of the ALSA API should consult asoundlib.h in the
alsa-lib-x.x.x/include directory.
Andy wrote this code for ALSA 0.9.0beta, but it is likely to remain valid for the 1.0 release. The stabilization of the API will be a long-awaited achievement, one that promises a great future for the development of serious and professional Linux audio applications. There is widespread hope that the ALSA programming interface will enter the kernel sources, eventually replacing OSS/Free and providing the kernel with a more modern and more flexible sound system.
Copyright © 2009 O'Reilly Media, Inc.