So, I've been working on this for a long time and have put in a bunch of research. Without doing a lot of low-level C hacking (which we don't have time for), here's what each individual computer will and won't be capable of. Remember that multiple computers can be connected to act together in any way we dream up, so long as the lower-level components allow it.
The audio component of the client system has three tiers.
Tier 1 - GStreamer & PyGst
This is the core of the audio playback technology. It provides the tools for decoding the audio, letting us play anything from MP3 to Ogg to M4A to FLAC without having to worry about what format it is. At this level we also get play/pause/seek support, and seeking works at nanosecond resolution (hell yeah). Here we can also control volume, but only for the stream as a whole, both channels at once.
Tier 2 - Jack Audio Connection Kit (JACK) & PyJack
This provides our dynamic mixer and patchbay, two very important components. It also has crazy low latency; with the proper kernel and memory configuration it will have the lowest latency the hardware physically allows (just a fun fact, really). The first important feature that JACK provides is the dynamic mixer. Without this, each channel on the sound card would be locked to a single audio stream: GStreamer could play one stereo stream, and the sound card would block any other incoming audio. Without JACK we can have no concurrent audio streams, and that's bad. The second feature that JACK provides is the patchbay. Say we have two stereo GStreamer streams (four channels) and a sound card with two outputs. JACK allows us to patch stream channels 1 and 2 into hardware output 1, and channels 3 and 4 into hardware output 2. We can map any JACK-capable outputs to any sound card that Linux supports.
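To make the patchbay idea concrete, here's a toy pure-Python model of that routing table. These are not real JACK calls (in PyJack or the newer jack-client package those are connect operations against a running server); it's just the many-streams-to-few-outputs shape, with invented port names:

```python
# Toy model of a JACK-style patchbay: hardware output ports on one side,
# any number of client output ports feeding each. All port names invented.
connections = {}

def patch(client_port, hw_port):
    """Route one client output port into one hardware output port."""
    connections.setdefault(hw_port, []).append(client_port)

# Two stereo GStreamer streams (four channels), one two-output card:
patch("gstreamer:stream1_left",  "system:playback_1")
patch("gstreamer:stream1_right", "system:playback_1")
patch("gstreamer:stream2_left",  "system:playback_2")
patch("gstreamer:stream2_right", "system:playback_2")

# Both channels of stream 1 now land on hardware output 1,
# while stream 2 plays concurrently on hardware output 2.
```

The point is that the mapping lives in the patchbay, not in the players: either stream can be re-routed to the other output without touching GStreamer at all.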
Tier 3 - Advanced Linux Sound Architecture (ALSA) & PyALSAAudio
The lowest level that we give a damn about. This is where JACK outputs to. Each of those hardware channels that Linux supports is abstracted by a kernel driver for ALSA. What this means for us is volume control for each and every supported hardware channel.
If you read this as more than a skim, you'll realize that we have no volume control for individual channels within a single audio stream. That would have to be handled at the GStreamer or JACK level, and as far as I can tell, every way to do it really, really sucks. For example, I could deinterleave the channels in GStreamer, modify the volume on each, and reinterleave them, but that is very, very dirty and more likely than not to damage the integrity of the stream. Not. Cool. Every other option I've come up with has been just as awful (or worse). Still, progress is good!
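For the curious, here's what that deinterleave-adjust-reinterleave hack amounts to on raw 16-bit PCM, in plain Python with made-up sample values. Doing this mid-pipeline on live buffers is where it gets dirty:

```python
import array

def scale_left_channel(interleaved, gain):
    """Apply a gain to the left channel of interleaved stereo 16-bit samples.

    This is the 'deinterleave, adjust, reinterleave' idea in miniature:
    we have to pick the stream apart sample by sample to touch one channel.
    """
    out = array.array('h', interleaved)
    for i in range(0, len(out), 2):  # even indices = left channel
        # clamp to the signed 16-bit range so the gain can't overflow a sample
        out[i] = max(-32768, min(32767, int(out[i] * gain)))
    return out

samples = array.array('h', [1000, 1000, -2000, -2000])  # L, R, L, R
quiet_left = scale_left_channel(samples, 0.5)
# left samples halved, right samples untouched: [500, 1000, -1000, -2000]
```

Harmless on a static buffer like this; the trouble starts when you try to do it to a live GStreamer stream without wrecking timing or channel alignment.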