True Story Follows
Back at the homefront, we’ve been looking into getting a baby monitor for our good son. Being that it’s 2K17, you’d think there would be cheap and reliable products for this. I mean, there are…but for a reliable audio / video system it’s looking like we’ll have to spend close to $200 after the tax man is included in this transaction. The reality of the situation, though, is that we’ve obviously got a home server already running in the house, I’ve got a DSLR camera at home with camera drivers I’ve already written, I can’t open a drawer at the house without a random Raspberry Pi falling out of it, and I can purchase a very high quality microphone for $50. And when the kids get older, I don’t want these baby monitor components to go to waste; I’d rather they be used for something like an image recognition system that controls servos to shoot water guns at squirrels that trespass on our property. Or to set up multiple microphones that triangulate the position of a person talking in the house based on the drop in amplitude of an audio signal.
So in my concrete case, I’m starting with video streaming, but in the abstract this is nothing more than a data transport layer.
So I want to stream data from one point to another. This sounds simple, but what if I want to beam the data over the internets to the other side of the world? Maybe I want to be in Russia and control the lights of my house later on. Point-to-point networking quickly becomes complicated because of NAT. Networking is not as simple as connecting from one IP address to another over some designated port: subnetworks live behind a router, and the firewall needs to be opened up for particular ports that then forward to a particular IP in your local network. I didn’t want to deal with any of this.
Instead, the problem is simplified by using an intermediary server. If the use case is strictly inside of a single network, it’s fairly straightforward to set up a server with a static IP inside that network. If the use case is across the internets, it’s also fairly straightforward to set up a small EC2 instance or pay a small amount of money per month for some basic hosting.
Once we have a server, we can just use Redis as an intermediary buffer. A producer writes to Redis, and a consumer consumes from Redis. This is about as primitive as you can get: all of the data lives in RAM, there are data structures optimized for the producer / consumer case (i.e. a double-ended queue), Redis’ pitfall of a lack of durability is not a problem here, and the software is open source so you don’t have to spend any money. It’s a win / win / win / win / win situation.
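To make the “double-ended queue” point concrete, here’s a minimal sketch of the two operations involved, in Python. The helper names are mine, and `client` can be any redis-py-style object:

```python
# The whole intermediary buffer is two Redis commands against one list key:
# the producer pushes chunks onto the left end, the consumer blocks and pops
# them off the right end, so data comes out in FIFO order.

def push_chunk(client, channel_id, chunk):
    """Producer side: append one chunk of bytes to the left of the list."""
    client.lpush(channel_id, chunk)

def pop_chunk(client, channel_id, timeout=0):
    """Consumer side: block until a chunk is available, then pop the oldest one."""
    item = client.brpop(channel_id, timeout=timeout)
    return None if item is None else item[1]  # brpop returns a (key, value) pair
```

With the real thing, `client` is just `redis.StrictRedis(host=...)` from the redis-py package, pointed at your intermediary server.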
All of the code for this project is available on GitHub. Don’t you judge me for its lack of test coverage or its relatively high number of assignment statements. This was a simple home project.
Consider the case below and how it makes you feel:
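Something along these lines (the file names and exact flags here are placeholders, not my real invocation):

```shell
# The first ffmpeg emits a raw h264 stream on stdout; the second reads it on
# stdin and writes out numbered PPM frames. input.h264 and the frame pattern
# are placeholder names.
ffmpeg -i input.h264 -c:v copy -f h264 - \
  | ffmpeg -f h264 -i - -f image2 frame_%04d.ppm
```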
Here, I’m piping a raw h264 video stream to another ffmpeg process that’s then converting the images to PPM frames. It’s irrelevant that the stream is h264 and the frames are PPM, that just so happens to be what I’m doing myself. The bigger point is simply that I’m piping the output of one ffmpeg instance to another; just connecting some tubes. What if I could add an intermediary layer, a “tube” if you will (and I will), in between the two pipes?
I wrote a quick program that reads from standard in and writes to Redis, and in the alternative case reads from Redis and writes to standard out. Now if I run the program in those two modes across two separate computers, the data will be streamed from one host to the intermediary Redis server and from there to the final host machine.
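A stripped-down sketch of what that program does (the real one is in the repo; the chunk size and function names here are just my shorthand, and `client` is assumed to be a redis-py-style object):

```python
CHUNK_SIZE = 4096  # arbitrary; tune for your stream's bitrate

def stdin_to_redis(client, channel_id, stream):
    """Producer mode: read binary chunks from a stream (e.g. sys.stdin.buffer)
    and push them into a Redis list."""
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:  # upstream pipe closed
            break
        client.lpush(channel_id, chunk)

def redis_to_stdout(client, channel_id, stream):
    """Consumer mode: pop chunks off the Redis list and write them to a stream
    (e.g. sys.stdout.buffer), preserving FIFO order."""
    while True:
        item = client.brpop(channel_id, timeout=5)
        if item is None:  # nothing arrived within the timeout
            break
        stream.write(item[1])  # brpop returns a (key, value) pair
        stream.flush()
```

Wire `sys.stdin.buffer` and `sys.stdout.buffer` into these from a small argv-driven main and you have the two modes.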
So to expand on the above example, I can create a data producer like so:
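Roughly like this, where `pipe.py` stands in for the quick program that bridges stdin to Redis, and the capture flags depend on your camera (all names here are placeholders for my setup):

```shell
# Producer: encode the camera feed to h264 and push it into Redis.
# /dev/video0, pipe.py, and the channel name are placeholders.
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -f h264 - \
  | python pipe.py produce --channel_id baby_monitor
```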
Then on my machine that consumes the data and does something with it (in this case, writes PPM frames, but you can easily output the video to ffplay or whatever else), I can consume the data like so:
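Again with placeholder names, the consumer side is the mirror image: pull chunks out of Redis and feed the reassembled stream to ffmpeg on stdin:

```shell
# Consumer: pop chunks from Redis, reassemble the h264 stream, decode to PPM frames.
python pipe.py consume --channel_id baby_monitor \
  | ffmpeg -f h264 -i - -f image2 frame_%04d.ppm
```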
In my example here, the video stream is uncompressed for the sake of simplicity during a demonstration, but in practice I’m applying some resizing and compression to make the stream as small as possible for a given use case. So you can optimize for both your specific networking conditions and the computing power available on each host (e.g., what if my producer is a robot where I don’t want to drain the battery, but I have an abundance of internets? Or what if it’s the inverse?)
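For instance, shrinking and squeezing the stream on the producer side is just a matter of extra ffmpeg flags; the values here are illustrative, not a recommendation, and `pipe.py` is again a placeholder name:

```shell
# Downscale to 640px wide (-2 keeps the height divisible by 2) and compress
# harder with libx264 (a higher CRF means a smaller, lossier stream).
ffmpeg -f v4l2 -i /dev/video0 -vf scale=640:-2 -c:v libx264 -preset veryfast -crf 30 -f h264 - \
  | python pipe.py produce --channel_id baby_monitor
```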
If I want to have multiple streams I can just change the “channel_id” to whatever channel ID I want.