News

23 Jan 2018

[Blog] Metadata format

Hello readers, this is Erik, and today we are going to be diving in-depth into some important updates we have been making to our internal metadata systems and communication handling.

Over the last couple of years "low latency" streaming has become more and more important: viewers no longer accept long buffering times or any delay in their stream. To achieve this, every process in your ecosystem needs to work with the lowest latency possible, and having a media server that helps in this respect is a large step in the right direction.

With this in mind we have been working on a new internal format for storing metadata, one that allows multiple processes to read it while a single source process generates and registers the incoming data. By doing this directly in memory we can now bring our internal latency down to a direct throughput of 2 frames, and this post is an overview of how we do this.

Communication in a modular system

Because MistServer is a multi-process environment - a separate binary is started for each and every viewer - efficiency is mostly determined by the amount of overhead induced by the communication between the various processes. Our very first version used a connection between each output and a corresponding input, which was replaced a couple of years ago by a technique called shared memory.

Shared memory is a technique where multiple processes - distributed over any number of executables - can access the same block of memory. By using this technique to distribute data, each source process only needs to write its data once, allowing any number of output processes to read it simultaneously.
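As an illustration of this technique (a minimal sketch using the POSIX shared memory API, not MistServer's actual code, and with a made-up page name), one process can create and fill a page while any other process that opens the same name sees the data immediately:

    // Minimal single-writer shared memory sketch (POSIX); not MistServer's actual code.
    #include <fcntl.h>     // O_CREAT, O_RDWR
    #include <sys/mman.h>  // shm_open, mmap, munmap
    #include <unistd.h>    // ftruncate, close
    #include <cstdio>
    #include <cstring>

    int main() {
      const char* name = "/example_stream_meta";  // hypothetical page name
      const size_t size = 4096;

      // Source side: create the shared page and map it into memory.
      int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
      if (fd < 0 || ftruncate(fd, size) != 0) { return 1; }
      char* page = (char*)mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (page == MAP_FAILED) { return 1; }

      // The source writes its data once...
      std::strcpy(page, "track metadata goes here");

      // ...and any output process can shm_open the same name read-only,
      // mmap it with PROT_READ, and read the data simultaneously.
      std::printf("%s\n", page);

      munmap(page, size);
      close(fd);
      shm_unlink(name);  // normally only done when the stream shuts down
      return 0;
    }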

The main delaying factor in its current implementation is that the metadata for a live stream only gets written to memory every second. As all output processes also read once per second, this yields a communication delay of up to 2 seconds.

For our live streams we also have the additional use case where multiple source processes can be used for a single stream in order to generate multi-bitrate output. All source processes write their own data to memory pages, and a separate MistInBuffer process handles the communication, authorization and negotiation of all tracks. In addition, it makes sure DVR constraints are met and inactive tracks are removed from the stream.
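To give a rough idea of the kind of bookkeeping this involves, here is a hypothetical sketch (the names, fields and timeouts are invented for this post and are not MistServer's internals): the buffer loops over all known tracks, drops parts that fall outside the DVR window, and removes tracks that have stopped receiving updates.

    // Hypothetical sketch of buffer-style bookkeeping; not MistServer's actual code.
    #include <cstdint>
    #include <deque>
    #include <map>

    struct Part { uint64_t time; /* payload location, size, ... */ };

    struct Track {
      std::deque<Part> parts;   // parts currently held in the buffer
      uint64_t lastUpdate = 0;  // last time the source wrote to this track
    };

    void bookkeeping(std::map<uint32_t, Track>& tracks, uint64_t now,
                     uint64_t dvrWindowMs, uint64_t inactiveTimeoutMs) {
      for (auto it = tracks.begin(); it != tracks.end();) {
        Track& t = it->second;
        // Enforce the DVR constraint: drop parts older than the configured window.
        while (!t.parts.empty() && t.parts.front().time + dvrWindowMs < now) {
          t.parts.pop_front();
        }
        // Remove tracks that have not received any data for a while.
        if (t.lastUpdate + inactiveTimeoutMs < now) {
          it = tracks.erase(it);
        } else {
          ++it;
        }
      }
    }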

During this it parses the data written to a page by a source process, only to regenerate metadata that was already available in the source to begin with. This in itself adds a delay as well, and it also wastes processing power recalculating information that was already known.

Synchronisation locks

To make matters worse, in order to maintain an up-to-date view on all data, each executable involved in this system needs to 'lock' the metadata page in its entirety to make sure it is the only process with access. Though the duration of this lock is generally measured in fractions of milliseconds, a stream with hundreds or thousands of simultaneous viewers does put a strain on keeping output realtime.
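A simplified illustration of the problem (using a POSIX named semaphore with an invented name; the real locking code differs): because readers and the writer all take the same exclusive lock on the whole page, every access serializes behind every other one, and the cost grows with the number of viewers.

    // Simplified whole-page locking illustration; not MistServer's actual code.
    #include <fcntl.h>      // O_CREAT
    #include <semaphore.h>  // sem_open, sem_wait, sem_post
    #include <cstring>

    // One named semaphore guards the entire metadata page (hypothetical name).
    static sem_t* pageLock = sem_open("/example_meta_lock", O_CREAT, 0600, 1);

    void writeMetadata(char* page, const char* newMeta, size_t len) {
      sem_wait(pageLock);  // no other process may touch the page now
      std::memcpy(page, newMeta, len);
      sem_post(pageLock);
    }

    void readMetadata(const char* page, char* out, size_t len) {
      sem_wait(pageLock);  // even readers take the exclusive lock
      std::memcpy(out, page, len);
      sem_post(pageLock);
    }
    // With hundreds or thousands of viewers, every read serializes behind
    // this single lock, which is exactly the strain described above.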

Reliable Access

For the last couple of months we have been busy reworking this structure to improve our metadata handling. By using the new RelAccX structure we can build a lock-less system based on records with fixed-size data fields.

If the field sizes do need to be changed, a reload of the entire page can be forced to reset the header fields. All processes that reload the structure afterwards will then be working with the new values, as these are stored in a structured header. This also allows us to add fields and maintain consistency going forward.
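Conceptually, such a page looks something like the sketch below (the field names are invented for this post and do not reflect the actual RelAccX layout): a structured header describes the record and field sizes, and because every record has the same fixed size, any process can compute the location of any record without taking a lock.

    // Conceptual sketch of a record-based page with a self-describing header.
    // Field names are invented here; this is not the actual RelAccX layout.
    #include <cstdint>

    struct PageHeader {
      uint32_t version;      // bumped when field sizes change, forcing a reload
      uint32_t recordSize;   // size of one record in bytes
      uint32_t recordCount;  // number of record slots on this page
      uint32_t fieldCount;   // number of fields per record
      // A real header would also describe each field's name, type and offset,
      // which is what allows adding fields later without breaking readers.
    };

    // Fixed-size records mean locating record i is plain pointer arithmetic:
    // no locks, no parsing, no coordination between processes.
    inline char* recordPtr(char* page, const PageHeader& h, uint32_t i) {
      return page + sizeof(PageHeader) + (uint64_t)i * h.recordSize;
    }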

About latency

By using the structure described above we can assign a single page per incoming track and have the source process write its updates to a structure that is immediately available to all other processes as well. By setting the 'available' flag only after writing all data, we can make sure that the data on the page matches the generated metadata. With this approach we have measured an end-to-end latency of 2 frames in our test environment.

In the same way we can set a record to 'unavailable' to indicate that, while the corresponding data might still exist at the moment, it is considered an unstable part of the stream and will be removed from the buffer in the near future.
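A simplified sketch of this handshake (invented names, and using a C++ atomic where the real implementation works on a shared memory page): the writer fills in the record first and sets the flag last, so any reader that sees the flag can trust the rest of the record, and clearing the flag again marks the record as unstable before it disappears from the buffer.

    // Simplified sketch of the available/unavailable handshake; invented names,
    // using a C++ atomic where the real code works on a shared memory page.
    #include <atomic>
    #include <cstdint>

    enum RecordState : uint8_t { UNAVAILABLE = 0, AVAILABLE = 1 };

    struct TrackRecord {
      uint64_t firstTime = 0;
      uint64_t lastTime = 0;
      std::atomic<uint8_t> state{UNAVAILABLE};
    };

    // Writer: fill in all data first, set the 'available' flag last.
    void publish(TrackRecord& r, uint64_t first, uint64_t last) {
      r.firstTime = first;
      r.lastTime = last;
      r.state.store(AVAILABLE, std::memory_order_release);
    }

    // Writer: mark a record as unstable shortly before it leaves the buffer.
    void retire(TrackRecord& r) {
      r.state.store(UNAVAILABLE, std::memory_order_release);
    }

    // Reader: only trust the record's contents while the flag is set.
    bool readRecord(const TrackRecord& r, uint64_t& first, uint64_t& last) {
      if (r.state.load(std::memory_order_acquire) != AVAILABLE) return false;
      first = r.firstTime;
      last = r.lastTime;
      return true;
    }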

Communications

Besides implementing this technique for the metadata, we have also upgraded the stability and speed of our internal communications. The key advantages here are that our statistics API can now give even more accurate data, and that output processes can now select any number of tracks from a source, up from the previous limit of 10 simultaneous tracks - yes, we have had customers run into this limit in their projects.

Conclusion

By updating the way we have structured our internal communications, we have been able to remove nearly all latency from the system, as well as reduce resource usage by no longer recalculating 'known' data. This system will be added in our next release, which will require a full restart of the system. If you have any questions about how to handle the downtime this generates, or about the new way we handle things, feel free to contact us.