Sound output on an embedded platform is generated via a multi-stage process. At the highest level, audio data is passed from the application to the kernel via the ALSA sound system. The kernel translates this into a digital audio stream and passes it to a hardware codec over an AC97 or I2S interface. The codec - which in this case refers to a digital audio encoder/decoder, rather than a compression/encoding scheme - converts the digital audio stream into an analogue audio signal. The resulting analogue signal is typically exposed to the outside world via stereo TRS connectors (Line out), possibly after an amplification stage (Headphone out).
Sound input follows the inverse path: from the microphone input (typically mono) to the codec, then to the CPU and kernel, and finally to the user application.
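The "digital audio stream" handed to the codec is ordinarily a sequence of PCM samples. As an illustration of what such a stream looks like (this is a generic sketch, not Snapper-specific code; the file name and tone parameters are arbitrary), the following generates a 440 Hz sine tone as 16-bit signed PCM and wraps it in a WAV container using only the Python standard library:

```python
import math
import struct
import wave

RATE = 44100       # sample frames per second
FREQ = 440.0       # tone frequency in Hz
SECONDS = 0.1      # duration of the generated tone
AMPLITUDE = 20000  # peak value, within the 16-bit signed range

# Build the raw PCM stream: one 16-bit little-endian sample per frame
frames = bytearray()
for n in range(int(RATE * SECONDS)):
    sample = int(AMPLITUDE * math.sin(2 * math.pi * FREQ * n / RATE))
    frames += struct.pack("<h", sample)

# Wrap the PCM data in a WAV container
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)    # mono
    wav.setsampwidth(2)    # 2 bytes = 16 bits per sample
    wav.setframerate(RATE)
    wav.writeframes(bytes(frames))
```

On a target with the ALSA utilities installed, the resulting file can be played through the output path described above with, for example, `aplay tone.wav`.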
All Snapper modules provide audio capabilities, and these are exposed on the Rig 200 via a number of connectors:
- Stereo line in
- Stereo line out
- Stereo headphones
- Mono microphone in
- Mono speaker (on board)
Control of audio is provided to userspace under Linux via the ALSA sound system, which offers a stable API and simple command-line mixer tools such as amixer. Internally, audio support is provided by:
- Snapper 255: Philips UCB1400 stereo codec, using an AC97 interface
- Snapper CL15: Texas Instruments TLV320AIC23B stereo codec, using an I2S interface
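Mixer controls such as those adjusted with amixer ultimately apply a gain, expressed in decibels, to the audio signal - either in the codec's hardware registers or in software. As a sketch of the arithmetic involved only (this is not how amixer itself is implemented; the function name is illustrative), the following applies a dB gain to 16-bit PCM samples:

```python
import math

def apply_gain_db(samples, gain_db):
    """Scale 16-bit signed PCM samples by a gain expressed in decibels."""
    scale = 10 ** (gain_db / 20.0)  # convert dB to an amplitude ratio
    out = []
    for s in samples:
        v = int(round(s * scale))
        # Clamp to the 16-bit signed range to avoid wrap-around distortion
        out.append(max(-32768, min(32767, v)))
    return out

# A -6 dB attenuation roughly halves the amplitude (ratio ~0.501)
print(apply_gain_db([0, 10000, -20000], -6.0))
```

Hardware mixer controls behave analogously, but the scaling is performed inside the codec before the digital-to-analogue conversion, so no CPU time is spent per sample.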